00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 630 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3290 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.152 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.152 The recommended git tool is: git 00:00:00.153 using credential 00000000-0000-0000-0000-000000000002 00:00:00.155 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.195 Fetching changes from the remote Git repository 00:00:00.197 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.232 Using shallow fetch with depth 1 00:00:00.232 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.232 > git --version # timeout=10 00:00:00.263 > git --version # 'git version 2.39.2' 00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.282 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.605 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.616 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.628 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD) 00:00:07.628 > git config core.sparsecheckout # timeout=10 00:00:07.638 > git read-tree -mu HEAD # timeout=10 00:00:07.656 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5 
00:00:07.678 Commit message: "jenkins/config: Drop WFP25 for maintenance" 00:00:07.679 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10 00:00:07.750 [Pipeline] Start of Pipeline 00:00:07.763 [Pipeline] library 00:00:07.765 Loading library shm_lib@master 00:00:07.765 Library shm_lib@master is cached. Copying from home. 00:00:07.781 [Pipeline] node 00:00:07.788 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.790 [Pipeline] { 00:00:07.798 [Pipeline] catchError 00:00:07.799 [Pipeline] { 00:00:07.811 [Pipeline] wrap 00:00:07.817 [Pipeline] { 00:00:07.823 [Pipeline] stage 00:00:07.824 [Pipeline] { (Prologue) 00:00:08.008 [Pipeline] sh 00:00:08.296 + logger -p user.info -t JENKINS-CI 00:00:08.314 [Pipeline] echo 00:00:08.315 Node: GP11 00:00:08.323 [Pipeline] sh 00:00:08.627 [Pipeline] setCustomBuildProperty 00:00:08.636 [Pipeline] echo 00:00:08.654 Cleanup processes 00:00:08.659 [Pipeline] sh 00:00:08.942 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.942 2107882 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.958 [Pipeline] sh 00:00:09.249 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.249 ++ grep -v 'sudo pgrep' 00:00:09.249 ++ awk '{print $1}' 00:00:09.249 + sudo kill -9 00:00:09.249 + true 00:00:09.266 [Pipeline] cleanWs 00:00:09.278 [WS-CLEANUP] Deleting project workspace... 00:00:09.278 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.285 [WS-CLEANUP] done 00:00:09.290 [Pipeline] setCustomBuildProperty 00:00:09.306 [Pipeline] sh 00:00:09.593 + sudo git config --global --replace-all safe.directory '*' 00:00:09.687 [Pipeline] httpRequest 00:00:09.719 [Pipeline] echo 00:00:09.721 Sorcerer 10.211.164.101 is alive 00:00:09.730 [Pipeline] httpRequest 00:00:09.735 HttpMethod: GET 00:00:09.735 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:09.737 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:09.757 Response Code: HTTP/1.1 200 OK 00:00:09.757 Success: Status code 200 is in the accepted range: 200,404 00:00:09.758 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:13.701 [Pipeline] sh 00:00:13.990 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:14.005 [Pipeline] httpRequest 00:00:14.039 [Pipeline] echo 00:00:14.041 Sorcerer 10.211.164.101 is alive 00:00:14.049 [Pipeline] httpRequest 00:00:14.054 HttpMethod: GET 00:00:14.054 URL: http://10.211.164.101/packages/spdk_b8378f94e02ef4dd21e7023626f6c3b47a36f5c1.tar.gz 00:00:14.055 Sending request to url: http://10.211.164.101/packages/spdk_b8378f94e02ef4dd21e7023626f6c3b47a36f5c1.tar.gz 00:00:14.069 Response Code: HTTP/1.1 200 OK 00:00:14.069 Success: Status code 200 is in the accepted range: 200,404 00:00:14.069 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b8378f94e02ef4dd21e7023626f6c3b47a36f5c1.tar.gz 00:01:34.021 [Pipeline] sh 00:01:34.306 + tar --no-same-owner -xf spdk_b8378f94e02ef4dd21e7023626f6c3b47a36f5c1.tar.gz 00:01:36.851 [Pipeline] sh 00:01:37.137 + git -C spdk log --oneline -n5 00:01:37.137 b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8 00:01:37.137 c2a77f51e module/bdev/nvme: add detach-monitor poller 00:01:37.137 e14876e17 lib/nvme: add 
spdk_nvme_scan_attached() 00:01:37.137 1d6dfcbeb nvme_pci: ctrlr_scan_attached callback 00:01:37.137 ff6594986 nvme_transport: optional callback to scan attached 00:01:37.155 [Pipeline] withCredentials 00:01:37.165 > git --version # timeout=10 00:01:37.175 > git --version # 'git version 2.39.2' 00:01:37.199 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:37.201 [Pipeline] { 00:01:37.209 [Pipeline] retry 00:01:37.211 [Pipeline] { 00:01:37.226 [Pipeline] sh 00:01:37.733 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:38.317 [Pipeline] } 00:01:38.339 [Pipeline] // retry 00:01:38.344 [Pipeline] } 00:01:38.364 [Pipeline] // withCredentials 00:01:38.373 [Pipeline] httpRequest 00:01:38.390 [Pipeline] echo 00:01:38.392 Sorcerer 10.211.164.101 is alive 00:01:38.400 [Pipeline] httpRequest 00:01:38.405 HttpMethod: GET 00:01:38.405 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:38.406 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:38.415 Response Code: HTTP/1.1 200 OK 00:01:38.415 Success: Status code 200 is in the accepted range: 200,404 00:01:38.416 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:46.290 [Pipeline] sh 00:01:46.574 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:48.488 [Pipeline] sh 00:01:48.772 + git -C dpdk log --oneline -n5 00:01:48.772 eeb0605f11 version: 23.11.0 00:01:48.772 238778122a doc: update release notes for 23.11 00:01:48.772 46aa6b3cfc doc: fix description of RSS features 00:01:48.772 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:48.772 7e421ae345 devtools: support skipping forbid rule check 00:01:48.784 [Pipeline] } 00:01:48.801 [Pipeline] // stage 00:01:48.811 [Pipeline] stage 00:01:48.813 [Pipeline] { (Prepare) 00:01:48.834 [Pipeline] writeFile 00:01:48.851 
[Pipeline] sh 00:01:49.133 + logger -p user.info -t JENKINS-CI 00:01:49.146 [Pipeline] sh 00:01:49.430 + logger -p user.info -t JENKINS-CI 00:01:49.443 [Pipeline] sh 00:01:49.728 + cat autorun-spdk.conf 00:01:49.728 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.728 SPDK_TEST_NVMF=1 00:01:49.728 SPDK_TEST_NVME_CLI=1 00:01:49.728 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.728 SPDK_TEST_NVMF_NICS=e810 00:01:49.728 SPDK_TEST_VFIOUSER=1 00:01:49.728 SPDK_RUN_UBSAN=1 00:01:49.728 NET_TYPE=phy 00:01:49.728 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:49.728 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:49.736 RUN_NIGHTLY=1 00:01:49.741 [Pipeline] readFile 00:01:49.767 [Pipeline] withEnv 00:01:49.770 [Pipeline] { 00:01:49.784 [Pipeline] sh 00:01:50.070 + set -ex 00:01:50.070 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:50.070 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:50.070 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.070 ++ SPDK_TEST_NVMF=1 00:01:50.070 ++ SPDK_TEST_NVME_CLI=1 00:01:50.070 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:50.070 ++ SPDK_TEST_NVMF_NICS=e810 00:01:50.070 ++ SPDK_TEST_VFIOUSER=1 00:01:50.070 ++ SPDK_RUN_UBSAN=1 00:01:50.070 ++ NET_TYPE=phy 00:01:50.070 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:50.070 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:50.070 ++ RUN_NIGHTLY=1 00:01:50.070 + case $SPDK_TEST_NVMF_NICS in 00:01:50.070 + DRIVERS=ice 00:01:50.070 + [[ tcp == \r\d\m\a ]] 00:01:50.070 + [[ -n ice ]] 00:01:50.070 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:50.070 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:50.070 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:50.070 rmmod: ERROR: Module irdma is not currently loaded 00:01:50.070 rmmod: ERROR: Module i40iw is not currently loaded 00:01:50.070 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:50.070 + true 00:01:50.070 + for D in $DRIVERS 
00:01:50.070 + sudo modprobe ice 00:01:50.070 + exit 0 00:01:50.080 [Pipeline] } 00:01:50.098 [Pipeline] // withEnv 00:01:50.104 [Pipeline] } 00:01:50.121 [Pipeline] // stage 00:01:50.131 [Pipeline] catchError 00:01:50.133 [Pipeline] { 00:01:50.148 [Pipeline] timeout 00:01:50.149 Timeout set to expire in 50 min 00:01:50.151 [Pipeline] { 00:01:50.167 [Pipeline] stage 00:01:50.169 [Pipeline] { (Tests) 00:01:50.184 [Pipeline] sh 00:01:50.470 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:50.470 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:50.470 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:50.470 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:50.470 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.470 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:50.470 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:50.470 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:50.470 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:50.470 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:50.470 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:50.470 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:50.470 + source /etc/os-release 00:01:50.470 ++ NAME='Fedora Linux' 00:01:50.470 ++ VERSION='38 (Cloud Edition)' 00:01:50.470 ++ ID=fedora 00:01:50.470 ++ VERSION_ID=38 00:01:50.470 ++ VERSION_CODENAME= 00:01:50.470 ++ PLATFORM_ID=platform:f38 00:01:50.470 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:50.470 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:50.470 ++ LOGO=fedora-logo-icon 00:01:50.470 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:50.470 ++ HOME_URL=https://fedoraproject.org/ 00:01:50.470 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:50.470 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 
00:01:50.470 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:50.470 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:50.470 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:50.470 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:50.470 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:50.470 ++ SUPPORT_END=2024-05-14 00:01:50.470 ++ VARIANT='Cloud Edition' 00:01:50.470 ++ VARIANT_ID=cloud 00:01:50.470 + uname -a 00:01:50.470 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:50.470 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:51.407 Hugepages 00:01:51.407 node hugesize free / total 00:01:51.407 node0 1048576kB 0 / 0 00:01:51.407 node0 2048kB 0 / 0 00:01:51.407 node1 1048576kB 0 / 0 00:01:51.407 node1 2048kB 0 / 0 00:01:51.407 00:01:51.407 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:51.407 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:51.407 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:51.407 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:51.407 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:51.666 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:51.666 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:51.666 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:51.666 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:51.666 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:51.666 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:51.666 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:51.666 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:51.666 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:51.666 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:51.666 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:51.666 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:51.666 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:51.666 + rm -f /tmp/spdk-ld-path 00:01:51.666 + source autorun-spdk.conf 00:01:51.666 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.666 ++ SPDK_TEST_NVMF=1 
00:01:51.666 ++ SPDK_TEST_NVME_CLI=1 00:01:51.666 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.666 ++ SPDK_TEST_NVMF_NICS=e810 00:01:51.666 ++ SPDK_TEST_VFIOUSER=1 00:01:51.666 ++ SPDK_RUN_UBSAN=1 00:01:51.666 ++ NET_TYPE=phy 00:01:51.666 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:51.666 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:51.666 ++ RUN_NIGHTLY=1 00:01:51.666 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:51.666 + [[ -n '' ]] 00:01:51.666 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:51.666 + for M in /var/spdk/build-*-manifest.txt 00:01:51.666 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:51.666 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:51.666 + for M in /var/spdk/build-*-manifest.txt 00:01:51.666 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:51.666 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:51.666 ++ uname 00:01:51.666 + [[ Linux == \L\i\n\u\x ]] 00:01:51.666 + sudo dmesg -T 00:01:51.666 + sudo dmesg --clear 00:01:51.666 + dmesg_pid=2108597 00:01:51.666 + [[ Fedora Linux == FreeBSD ]] 00:01:51.666 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.666 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.666 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:51.666 + sudo dmesg -Tw 00:01:51.666 + [[ -x /usr/src/fio-static/fio ]] 00:01:51.666 + export FIO_BIN=/usr/src/fio-static/fio 00:01:51.666 + FIO_BIN=/usr/src/fio-static/fio 00:01:51.666 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:51.666 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:51.666 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:51.666 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.666 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.666 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:51.666 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.666 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.666 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:51.666 Test configuration: 00:01:51.666 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.666 SPDK_TEST_NVMF=1 00:01:51.666 SPDK_TEST_NVME_CLI=1 00:01:51.666 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.666 SPDK_TEST_NVMF_NICS=e810 00:01:51.666 SPDK_TEST_VFIOUSER=1 00:01:51.666 SPDK_RUN_UBSAN=1 00:01:51.666 NET_TYPE=phy 00:01:51.666 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:51.666 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:51.666 RUN_NIGHTLY=1 17:49:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:51.666 17:49:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:51.666 17:49:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:51.666 17:49:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:51.666 17:49:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.666 17:49:59 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.666 17:49:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.666 17:49:59 -- paths/export.sh@5 -- $ export PATH 00:01:51.666 17:49:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.666 17:49:59 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:51.666 17:49:59 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:51.666 17:49:59 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721749799.XXXXXX 00:01:51.666 17:49:59 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721749799.z8zDUb 00:01:51.666 17:49:59 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:51.666 17:49:59 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 00:01:51.666 17:49:59 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:51.666 17:49:59 -- 
common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:51.666 17:49:59 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:51.666 17:49:59 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:51.666 17:49:59 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:51.666 17:49:59 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:51.666 17:49:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.666 17:49:59 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:51.666 17:49:59 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:51.666 17:49:59 -- pm/common@17 -- $ local monitor 00:01:51.666 17:49:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.666 17:49:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.666 17:49:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.666 17:49:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.666 17:49:59 -- pm/common@21 -- $ date +%s 00:01:51.666 17:49:59 -- pm/common@21 -- $ date +%s 00:01:51.666 17:49:59 -- pm/common@25 -- $ sleep 1 00:01:51.666 17:49:59 -- pm/common@21 -- $ date +%s 00:01:51.666 17:49:59 -- pm/common@21 -- $ date +%s 00:01:51.666 17:49:59 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721749799 00:01:51.666 17:49:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721749799 00:01:51.666 17:49:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721749799 00:01:51.666 17:49:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721749799 00:01:51.926 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721749799_collect-vmstat.pm.log 00:01:51.926 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721749799_collect-cpu-load.pm.log 00:01:51.926 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721749799_collect-cpu-temp.pm.log 00:01:51.926 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721749799_collect-bmc-pm.bmc.pm.log 00:01:52.864 17:50:00 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:52.864 17:50:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:52.864 17:50:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:52.864 17:50:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:52.864 17:50:00 -- spdk/autobuild.sh@16 -- $ date -u 00:01:52.864 Tue Jul 23 03:50:00 PM UTC 2024 00:01:52.864 17:50:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:52.864 v24.09-pre-302-gb8378f94e 00:01:52.864 17:50:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:52.864 17:50:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:52.864 17:50:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:52.864 17:50:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:52.864 17:50:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:52.864 17:50:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.864 ************************************ 00:01:52.864 START TEST ubsan 00:01:52.864 ************************************ 00:01:52.864 17:50:00 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:52.864 using ubsan 00:01:52.864 00:01:52.864 real 0m0.000s 00:01:52.864 user 0m0.000s 00:01:52.864 sys 0m0.000s 00:01:52.864 17:50:00 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:52.864 17:50:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:52.864 ************************************ 00:01:52.864 END TEST ubsan 00:01:52.864 ************************************ 00:01:52.864 17:50:00 -- common/autotest_common.sh@1142 -- $ return 0 00:01:52.864 17:50:00 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:52.864 17:50:00 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:52.864 17:50:00 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:52.864 17:50:00 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:52.864 17:50:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:52.864 17:50:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.864 ************************************ 00:01:52.864 START TEST build_native_dpdk 00:01:52.864 ************************************ 00:01:52.864 17:50:00 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:52.864 17:50:00 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:52.864 17:50:00 
build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:52.865 eeb0605f11 version: 23.11.0 00:01:52.865 238778122a doc: update release notes for 23.11 00:01:52.865 46aa6b3cfc doc: fix description of RSS features 00:01:52.865 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:52.865 7e421ae345 devtools: support skipping forbid rule check 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 
00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:52.865 patching file config/rte_config.h 00:01:52.865 Hunk #1 succeeded at 60 (offset 1 line). 
00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:52.865 17:50:00 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:52.865 patching file lib/pcapng/rte_pcapng.c 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:52.865 17:50:00 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' 
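The trace above walks `scripts/common.sh`'s version comparison component by component (23 vs 21 returns "not less-than"; 23 vs 24 returns "less-than", so the second patch is applied). A minimal re-sketch of that logic, assuming the same `IFS=.-:` splitting the trace shows — names and structure here are illustrative, not the exact SPDK implementation:

```shell
# Sketch of a dotted-version "less-than" test, modeled on the cmp_versions
# trace above: split both versions on '.', '-', or ':', then compare the
# numeric components left to right. Returns 0 (true) iff $1 < $2.
lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing components default to 0 (e.g. "21" vs "21.0.0").
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal is not less-than
}

lt 23.11.0 24.07.0 && echo "older"   # prints "older"
lt 23.11.0 21.0.0  || echo "newer"   # prints "newer"
```

This matches the two decisions in the log: 23.11.0 is not older than 21, so the first `lt` check returns 1, while 23.11.0 is older than 24.07.0, so the rte_pcapng.c patch runs.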
-Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:57.086 The Meson build system 00:01:57.086 Version: 1.3.1 00:01:57.086 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:57.086 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:57.086 Build type: native build 00:01:57.086 Program cat found: YES (/usr/bin/cat) 00:01:57.086 Project name: DPDK 00:01:57.086 Project version: 23.11.0 00:01:57.086 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:57.086 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:57.086 Host machine cpu family: x86_64 00:01:57.086 Host machine cpu: x86_64 00:01:57.086 Message: ## Building in Developer Mode ## 00:01:57.086 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.086 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:57.086 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.086 Program python3 found: YES (/usr/bin/python3) 00:01:57.086 Program cat found: YES (/usr/bin/cat) 00:01:57.086 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:57.086 Compiler for C supports arguments -march=native: YES 00:01:57.086 Checking for size of "void *" : 8 00:01:57.086 Checking for size of "void *" : 8 (cached) 00:01:57.086 Library m found: YES 00:01:57.086 Library numa found: YES 00:01:57.086 Has header "numaif.h" : YES 00:01:57.086 Library fdt found: NO 00:01:57.086 Library execinfo found: NO 00:01:57.086 Has header "execinfo.h" : YES 00:01:57.086 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:57.086 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.086 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.086 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.086 Run-time dependency openssl found: YES 3.0.9 00:01:57.086 Run-time dependency libpcap found: YES 1.10.4 00:01:57.086 Has header "pcap.h" with dependency libpcap: YES 00:01:57.086 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.086 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.086 Compiler for C supports arguments -Wformat: YES 00:01:57.086 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.086 Compiler for C supports arguments -Wformat-security: NO 00:01:57.086 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.086 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.086 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.086 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.086 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.086 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.086 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.086 Compiler for C supports arguments -Wundef: YES 00:01:57.086 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.086 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.086 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.086 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:01:57.086 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.086 Program objdump found: YES (/usr/bin/objdump) 00:01:57.086 Compiler for C supports arguments -mavx512f: YES 00:01:57.086 Checking if "AVX512 checking" compiles: YES 00:01:57.086 Fetching value of define "__SSE4_2__" : 1 00:01:57.086 Fetching value of define "__AES__" : 1 00:01:57.086 Fetching value of define "__AVX__" : 1 00:01:57.086 Fetching value of define "__AVX2__" : (undefined) 00:01:57.086 Fetching value of define "__AVX512BW__" : (undefined) 00:01:57.086 Fetching value of define "__AVX512CD__" : (undefined) 00:01:57.086 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:57.086 Fetching value of define "__AVX512F__" : (undefined) 00:01:57.086 Fetching value of define "__AVX512VL__" : (undefined) 00:01:57.086 Fetching value of define "__PCLMUL__" : 1 00:01:57.086 Fetching value of define "__RDRND__" : 1 00:01:57.086 Fetching value of define "__RDSEED__" : (undefined) 00:01:57.086 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:57.086 Fetching value of define "__znver1__" : (undefined) 00:01:57.086 Fetching value of define "__znver2__" : (undefined) 00:01:57.086 Fetching value of define "__znver3__" : (undefined) 00:01:57.086 Fetching value of define "__znver4__" : (undefined) 00:01:57.086 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.086 Message: lib/log: Defining dependency "log" 00:01:57.086 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.086 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.086 Checking for function "getentropy" : NO 00:01:57.086 Message: lib/eal: Defining dependency "eal" 00:01:57.086 Message: lib/ring: Defining dependency "ring" 00:01:57.086 Message: lib/rcu: Defining dependency "rcu" 00:01:57.086 Message: lib/mempool: Defining dependency "mempool" 00:01:57.086 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.086 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.086 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.086 Compiler for C supports arguments -mpclmul: YES 00:01:57.086 Compiler for C supports arguments -maes: YES 00:01:57.086 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.087 Compiler for C supports arguments -mavx512bw: YES 00:01:57.087 Compiler for C supports arguments -mavx512dq: YES 00:01:57.087 Compiler for C supports arguments -mavx512vl: YES 00:01:57.087 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.087 Compiler for C supports arguments -mavx2: YES 00:01:57.087 Compiler for C supports arguments -mavx: YES 00:01:57.087 Message: lib/net: Defining dependency "net" 00:01:57.087 Message: lib/meter: Defining dependency "meter" 00:01:57.087 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.087 Message: lib/pci: Defining dependency "pci" 00:01:57.087 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.087 Message: lib/metrics: Defining dependency "metrics" 00:01:57.087 Message: lib/hash: Defining dependency "hash" 00:01:57.087 Message: lib/timer: Defining dependency "timer" 00:01:57.087 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.087 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:57.087 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:57.087 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:57.087 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:57.087 Message: lib/acl: Defining dependency "acl" 00:01:57.087 Message: lib/bbdev: Defining dependency "bbdev" 00:01:57.087 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:57.087 Run-time dependency libelf found: YES 0.190 00:01:57.087 Message: lib/bpf: Defining dependency "bpf" 00:01:57.087 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:57.087 Message: lib/compressdev: Defining 
dependency "compressdev" 00:01:57.087 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.087 Message: lib/distributor: Defining dependency "distributor" 00:01:57.087 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.087 Message: lib/efd: Defining dependency "efd" 00:01:57.087 Message: lib/eventdev: Defining dependency "eventdev" 00:01:57.087 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:57.087 Message: lib/gpudev: Defining dependency "gpudev" 00:01:57.087 Message: lib/gro: Defining dependency "gro" 00:01:57.087 Message: lib/gso: Defining dependency "gso" 00:01:57.087 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:57.087 Message: lib/jobstats: Defining dependency "jobstats" 00:01:57.087 Message: lib/latencystats: Defining dependency "latencystats" 00:01:57.087 Message: lib/lpm: Defining dependency "lpm" 00:01:57.087 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.087 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:57.087 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:57.087 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:57.087 Message: lib/member: Defining dependency "member" 00:01:57.087 Message: lib/pcapng: Defining dependency "pcapng" 00:01:57.087 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.087 Message: lib/power: Defining dependency "power" 00:01:57.087 Message: lib/rawdev: Defining dependency "rawdev" 00:01:57.087 Message: lib/regexdev: Defining dependency "regexdev" 00:01:57.087 Message: lib/mldev: Defining dependency "mldev" 00:01:57.087 Message: lib/rib: Defining dependency "rib" 00:01:57.087 Message: lib/reorder: Defining dependency "reorder" 00:01:57.087 Message: lib/sched: Defining dependency "sched" 00:01:57.087 Message: lib/security: Defining dependency "security" 00:01:57.087 Message: lib/stack: Defining dependency "stack" 00:01:57.087 Has header "linux/userfaultfd.h" : YES 00:01:57.087 Has 
header "linux/vduse.h" : YES 00:01:57.087 Message: lib/vhost: Defining dependency "vhost" 00:01:57.087 Message: lib/ipsec: Defining dependency "ipsec" 00:01:57.087 Message: lib/pdcp: Defining dependency "pdcp" 00:01:57.087 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.087 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:57.087 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:57.087 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:57.087 Message: lib/fib: Defining dependency "fib" 00:01:57.087 Message: lib/port: Defining dependency "port" 00:01:57.087 Message: lib/pdump: Defining dependency "pdump" 00:01:57.087 Message: lib/table: Defining dependency "table" 00:01:57.087 Message: lib/pipeline: Defining dependency "pipeline" 00:01:57.087 Message: lib/graph: Defining dependency "graph" 00:01:57.087 Message: lib/node: Defining dependency "node" 00:01:58.469 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.469 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.469 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.469 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.469 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:58.469 Compiler for C supports arguments -Wno-unused-value: YES 00:01:58.469 Compiler for C supports arguments -Wno-format: YES 00:01:58.469 Compiler for C supports arguments -Wno-format-security: YES 00:01:58.469 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:58.469 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:58.469 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:58.469 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:58.469 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:58.469 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.469 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:01:58.469 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:58.469 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:58.469 Has header "sys/epoll.h" : YES 00:01:58.469 Program doxygen found: YES (/usr/bin/doxygen) 00:01:58.469 Configuring doxy-api-html.conf using configuration 00:01:58.469 Configuring doxy-api-man.conf using configuration 00:01:58.469 Program mandb found: YES (/usr/bin/mandb) 00:01:58.469 Program sphinx-build found: NO 00:01:58.469 Configuring rte_build_config.h using configuration 00:01:58.469 Message: 00:01:58.469 ================= 00:01:58.469 Applications Enabled 00:01:58.469 ================= 00:01:58.469 00:01:58.469 apps: 00:01:58.469 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:58.469 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:58.469 test-pmd, test-regex, test-sad, test-security-perf, 00:01:58.469 00:01:58.469 Message: 00:01:58.469 ================= 00:01:58.469 Libraries Enabled 00:01:58.469 ================= 00:01:58.469 00:01:58.469 libs: 00:01:58.469 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:58.469 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:58.469 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:58.469 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:58.469 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:58.469 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:58.469 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:58.469 00:01:58.469 00:01:58.469 Message: 00:01:58.469 =============== 00:01:58.469 Drivers Enabled 00:01:58.469 =============== 00:01:58.469 00:01:58.469 common: 00:01:58.469 00:01:58.469 bus: 00:01:58.469 pci, vdev, 00:01:58.469 mempool: 00:01:58.469 ring, 00:01:58.469 dma: 00:01:58.469 
00:01:58.469 net: 00:01:58.469 i40e, 00:01:58.469 raw: 00:01:58.469 00:01:58.469 crypto: 00:01:58.469 00:01:58.469 compress: 00:01:58.469 00:01:58.469 regex: 00:01:58.469 00:01:58.469 ml: 00:01:58.469 00:01:58.469 vdpa: 00:01:58.469 00:01:58.469 event: 00:01:58.469 00:01:58.469 baseband: 00:01:58.469 00:01:58.469 gpu: 00:01:58.469 00:01:58.469 00:01:58.469 Message: 00:01:58.469 ================= 00:01:58.469 Content Skipped 00:01:58.469 ================= 00:01:58.469 00:01:58.469 apps: 00:01:58.469 00:01:58.469 libs: 00:01:58.469 00:01:58.469 drivers: 00:01:58.469 common/cpt: not in enabled drivers build config 00:01:58.469 common/dpaax: not in enabled drivers build config 00:01:58.469 common/iavf: not in enabled drivers build config 00:01:58.469 common/idpf: not in enabled drivers build config 00:01:58.469 common/mvep: not in enabled drivers build config 00:01:58.469 common/octeontx: not in enabled drivers build config 00:01:58.469 bus/auxiliary: not in enabled drivers build config 00:01:58.469 bus/cdx: not in enabled drivers build config 00:01:58.469 bus/dpaa: not in enabled drivers build config 00:01:58.469 bus/fslmc: not in enabled drivers build config 00:01:58.469 bus/ifpga: not in enabled drivers build config 00:01:58.469 bus/platform: not in enabled drivers build config 00:01:58.469 bus/vmbus: not in enabled drivers build config 00:01:58.469 common/cnxk: not in enabled drivers build config 00:01:58.469 common/mlx5: not in enabled drivers build config 00:01:58.469 common/nfp: not in enabled drivers build config 00:01:58.469 common/qat: not in enabled drivers build config 00:01:58.469 common/sfc_efx: not in enabled drivers build config 00:01:58.469 mempool/bucket: not in enabled drivers build config 00:01:58.469 mempool/cnxk: not in enabled drivers build config 00:01:58.469 mempool/dpaa: not in enabled drivers build config 00:01:58.469 mempool/dpaa2: not in enabled drivers build config 00:01:58.469 mempool/octeontx: not in enabled drivers build config 
00:01:58.469 mempool/stack: not in enabled drivers build config 00:01:58.469 dma/cnxk: not in enabled drivers build config 00:01:58.469 dma/dpaa: not in enabled drivers build config 00:01:58.469 dma/dpaa2: not in enabled drivers build config 00:01:58.469 dma/hisilicon: not in enabled drivers build config 00:01:58.469 dma/idxd: not in enabled drivers build config 00:01:58.469 dma/ioat: not in enabled drivers build config 00:01:58.469 dma/skeleton: not in enabled drivers build config 00:01:58.469 net/af_packet: not in enabled drivers build config 00:01:58.469 net/af_xdp: not in enabled drivers build config 00:01:58.469 net/ark: not in enabled drivers build config 00:01:58.469 net/atlantic: not in enabled drivers build config 00:01:58.469 net/avp: not in enabled drivers build config 00:01:58.469 net/axgbe: not in enabled drivers build config 00:01:58.469 net/bnx2x: not in enabled drivers build config 00:01:58.469 net/bnxt: not in enabled drivers build config 00:01:58.469 net/bonding: not in enabled drivers build config 00:01:58.469 net/cnxk: not in enabled drivers build config 00:01:58.469 net/cpfl: not in enabled drivers build config 00:01:58.469 net/cxgbe: not in enabled drivers build config 00:01:58.469 net/dpaa: not in enabled drivers build config 00:01:58.469 net/dpaa2: not in enabled drivers build config 00:01:58.469 net/e1000: not in enabled drivers build config 00:01:58.469 net/ena: not in enabled drivers build config 00:01:58.469 net/enetc: not in enabled drivers build config 00:01:58.469 net/enetfec: not in enabled drivers build config 00:01:58.469 net/enic: not in enabled drivers build config 00:01:58.469 net/failsafe: not in enabled drivers build config 00:01:58.469 net/fm10k: not in enabled drivers build config 00:01:58.469 net/gve: not in enabled drivers build config 00:01:58.469 net/hinic: not in enabled drivers build config 00:01:58.469 net/hns3: not in enabled drivers build config 00:01:58.469 net/iavf: not in enabled drivers build config 00:01:58.469 
net/ice: not in enabled drivers build config 00:01:58.469 net/idpf: not in enabled drivers build config 00:01:58.469 net/igc: not in enabled drivers build config 00:01:58.469 net/ionic: not in enabled drivers build config 00:01:58.469 net/ipn3ke: not in enabled drivers build config 00:01:58.470 net/ixgbe: not in enabled drivers build config 00:01:58.470 net/mana: not in enabled drivers build config 00:01:58.470 net/memif: not in enabled drivers build config 00:01:58.470 net/mlx4: not in enabled drivers build config 00:01:58.470 net/mlx5: not in enabled drivers build config 00:01:58.470 net/mvneta: not in enabled drivers build config 00:01:58.470 net/mvpp2: not in enabled drivers build config 00:01:58.470 net/netvsc: not in enabled drivers build config 00:01:58.470 net/nfb: not in enabled drivers build config 00:01:58.470 net/nfp: not in enabled drivers build config 00:01:58.470 net/ngbe: not in enabled drivers build config 00:01:58.470 net/null: not in enabled drivers build config 00:01:58.470 net/octeontx: not in enabled drivers build config 00:01:58.470 net/octeon_ep: not in enabled drivers build config 00:01:58.470 net/pcap: not in enabled drivers build config 00:01:58.470 net/pfe: not in enabled drivers build config 00:01:58.470 net/qede: not in enabled drivers build config 00:01:58.470 net/ring: not in enabled drivers build config 00:01:58.470 net/sfc: not in enabled drivers build config 00:01:58.470 net/softnic: not in enabled drivers build config 00:01:58.470 net/tap: not in enabled drivers build config 00:01:58.470 net/thunderx: not in enabled drivers build config 00:01:58.470 net/txgbe: not in enabled drivers build config 00:01:58.470 net/vdev_netvsc: not in enabled drivers build config 00:01:58.470 net/vhost: not in enabled drivers build config 00:01:58.470 net/virtio: not in enabled drivers build config 00:01:58.470 net/vmxnet3: not in enabled drivers build config 00:01:58.470 raw/cnxk_bphy: not in enabled drivers build config 00:01:58.470 raw/cnxk_gpio: 
not in enabled drivers build config 00:01:58.470 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:58.470 raw/ifpga: not in enabled drivers build config 00:01:58.470 raw/ntb: not in enabled drivers build config 00:01:58.470 raw/skeleton: not in enabled drivers build config 00:01:58.470 crypto/armv8: not in enabled drivers build config 00:01:58.470 crypto/bcmfs: not in enabled drivers build config 00:01:58.470 crypto/caam_jr: not in enabled drivers build config 00:01:58.470 crypto/ccp: not in enabled drivers build config 00:01:58.470 crypto/cnxk: not in enabled drivers build config 00:01:58.470 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.470 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.470 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.470 crypto/mlx5: not in enabled drivers build config 00:01:58.470 crypto/mvsam: not in enabled drivers build config 00:01:58.470 crypto/nitrox: not in enabled drivers build config 00:01:58.470 crypto/null: not in enabled drivers build config 00:01:58.470 crypto/octeontx: not in enabled drivers build config 00:01:58.470 crypto/openssl: not in enabled drivers build config 00:01:58.470 crypto/scheduler: not in enabled drivers build config 00:01:58.470 crypto/uadk: not in enabled drivers build config 00:01:58.470 crypto/virtio: not in enabled drivers build config 00:01:58.470 compress/isal: not in enabled drivers build config 00:01:58.470 compress/mlx5: not in enabled drivers build config 00:01:58.470 compress/octeontx: not in enabled drivers build config 00:01:58.470 compress/zlib: not in enabled drivers build config 00:01:58.470 regex/mlx5: not in enabled drivers build config 00:01:58.470 regex/cn9k: not in enabled drivers build config 00:01:58.470 ml/cnxk: not in enabled drivers build config 00:01:58.470 vdpa/ifc: not in enabled drivers build config 00:01:58.470 vdpa/mlx5: not in enabled drivers build config 00:01:58.470 vdpa/nfp: not in enabled drivers build config 
00:01:58.470 vdpa/sfc: not in enabled drivers build config 00:01:58.470 event/cnxk: not in enabled drivers build config 00:01:58.470 event/dlb2: not in enabled drivers build config 00:01:58.470 event/dpaa: not in enabled drivers build config 00:01:58.470 event/dpaa2: not in enabled drivers build config 00:01:58.470 event/dsw: not in enabled drivers build config 00:01:58.470 event/opdl: not in enabled drivers build config 00:01:58.470 event/skeleton: not in enabled drivers build config 00:01:58.470 event/sw: not in enabled drivers build config 00:01:58.470 event/octeontx: not in enabled drivers build config 00:01:58.470 baseband/acc: not in enabled drivers build config 00:01:58.470 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:58.470 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:58.470 baseband/la12xx: not in enabled drivers build config 00:01:58.470 baseband/null: not in enabled drivers build config 00:01:58.470 baseband/turbo_sw: not in enabled drivers build config 00:01:58.470 gpu/cuda: not in enabled drivers build config 00:01:58.470 00:01:58.470 00:01:58.470 Build targets in project: 220 00:01:58.470 00:01:58.470 DPDK 23.11.0 00:01:58.470 00:01:58.470 User defined options 00:01:58.470 libdir : lib 00:01:58.470 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:58.470 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:58.470 c_link_args : 00:01:58.470 enable_docs : false 00:01:58.470 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:58.470 enable_kmods : false 00:01:58.470 machine : native 00:01:58.470 tests : false 00:01:58.470 00:01:58.470 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.470 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
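The configure step logged above triggers two deprecations: the "machine" option (meson suggests `cpu_instruction_set`) and the bare `meson [options]` form (meson suggests `meson setup [options]`). A sketch of the equivalent modernized invocation, reconstructed from the "User defined options" summary in the log — paths shortened and this is an illustrative command shape, not the exact autobuild script:

```shell
# Hypothetical modern spelling of the configure command shown in the log;
# flags are taken from the "User defined options" summary above.
meson setup build-tmp \
    --prefix="$PWD/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
```

Functionally this should produce the same 220 build targets; only the deprecated `-Dmachine=native` spelling and the missing `setup` subcommand change.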
00:01:58.735 17:50:06 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:58.735 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:58.735 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.735 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.735 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.735 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.735 [5/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.735 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.735 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.994 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.994 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.994 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.994 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.994 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.994 [13/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.994 [14/710] Linking static target lib/librte_kvargs.a 00:01:58.994 [15/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.994 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.994 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.994 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.994 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.994 [20/710] Linking static target lib/librte_log.a 00:01:59.256 [21/710] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:59.256 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.828 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.828 [24/710] Linking target lib/librte_log.so.24.0 00:01:59.828 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:59.828 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.828 [27/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.828 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:59.828 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:59.828 [30/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:59.828 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:59.828 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:59.828 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.090 [34/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.090 [35/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.090 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.090 [37/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.090 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.090 [39/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.090 [40/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.090 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.090 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.090 [43/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:00.090 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:00.090 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:00.090 [46/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:00.090 [47/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:00.090 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:00.090 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:00.090 [50/710] Linking target lib/librte_kvargs.so.24.0
00:02:00.090 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:00.090 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:00.090 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:00.090 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:00.090 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:00.090 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:00.090 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:00.090 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:00.090 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:00.090 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:00.090 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:00.090 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:00.352 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:00.352 [64/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:00.352 [65/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:00.352 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:00.615 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:00.615 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:00.615 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:00.615 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:00.615 [71/710] Linking static target lib/librte_pci.a
00:02:00.615 [72/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:00.875 [73/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:00.875 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:00.875 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:00.875 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:00.875 [77/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.875 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:00.875 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:00.875 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:01.137 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:01.137 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:01.137 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:01.137 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:01.137 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:01.137 [86/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:01.137 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:01.137 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:01.137 [89/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:01.137 [90/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:01.137 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:01.137 [92/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:01.137 [93/710] Linking static target lib/librte_ring.a
00:02:01.137 [94/710] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:01.137 [95/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:01.137 [96/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:01.404 [97/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:01.404 [98/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:01.404 [99/710] Linking static target lib/librte_meter.a
00:02:01.404 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:01.404 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:01.404 [102/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:01.404 [103/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:01.404 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:01.404 [105/710] Linking static target lib/librte_telemetry.a
00:02:01.404 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:01.404 [107/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:01.404 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:01.404 [109/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:01.404 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:01.666 [111/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:01.666 [112/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:01.666 [113/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:01.666 [114/710] Linking static target lib/librte_eal.a
00:02:01.666 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:01.666 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.666 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.666 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:01.666 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:01.666 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:01.925 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:01.925 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:01.925 [123/710] Linking static target lib/librte_net.a
00:02:01.925 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:01.925 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:01.925 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:01.925 [127/710] Linking static target lib/librte_cmdline.a
00:02:01.925 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:01.925 [129/710] Linking static target lib/librte_mempool.a
00:02:02.189 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.189 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:02.189 [132/710] Linking target lib/librte_telemetry.so.24.0
00:02:02.189 [133/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.189 [134/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:02.189 [135/710] Linking static target lib/librte_cfgfile.a
00:02:02.189 [136/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:02.189 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:02.189 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:02.449 [139/710] Linking static target lib/librte_metrics.a
00:02:02.449 [140/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:02.449 [141/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:02.449 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:02.449 [143/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:02.449 [144/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:02.449 [145/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:02.712 [146/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:02.712 [147/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:02.712 [148/710] Linking static target lib/librte_rcu.a
00:02:02.712 [149/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:02.712 [150/710] Linking static target lib/librte_bitratestats.a
00:02:02.712 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:02.712 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:02.980 [153/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:02.980 [154/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.980 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:02.980 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:02.980 [157/710] Linking static target lib/librte_timer.a
00:02:02.980 [158/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:02.980 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:02.980 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:02.980 [161/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.980 [162/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.980 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.980 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:02.980 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.241 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:03.241 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:03.241 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:03.241 [169/710] Linking static target lib/librte_bbdev.a
00:02:03.241 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:03.241 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.504 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:03.504 [173/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:03.504 [174/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.504 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:03.504 [176/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:03.504 [177/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:03.504 [178/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:03.504 [179/710] Linking static target lib/librte_compressdev.a
00:02:03.504 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:03.764 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:04.022 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:04.022 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:04.022 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:04.022 [185/710] Linking static target lib/librte_distributor.a
00:02:04.022 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:04.281 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.281 [188/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:04.281 [189/710] Linking static target lib/librte_dmadev.a
00:02:04.281 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:04.281 [191/710] Linking static target lib/librte_bpf.a
00:02:04.281 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:04.281 [193/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.281 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:04.281 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:04.281 [196/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.544 [197/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:04.544 [198/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:04.544 [199/710] Linking static target lib/librte_dispatcher.a
00:02:04.544 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:04.544 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:04.544 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:04.544 [203/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:04.544 [204/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:04.544 [205/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:04.808 [206/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:04.808 [207/710] Linking static target lib/librte_gpudev.a
00:02:04.808 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:04.808 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:04.808 [210/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:04.808 [211/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:04.808 [212/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.808 [213/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:04.808 [214/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:04.808 [215/710] Linking static target lib/librte_gro.a
00:02:04.808 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.808 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:04.808 [218/710] Linking static target lib/librte_jobstats.a
00:02:05.077 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:05.077 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:05.077 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:05.077 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.335 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.335 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:05.335 [225/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:05.335 [226/710] Linking static target lib/librte_latencystats.a
00:02:05.335 [227/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.335 [228/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:05.335 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:05.335 [230/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:05.335 [231/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:05.596 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:05.596 [233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:05.596 [234/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:05.596 [235/710] Linking static target lib/librte_ip_frag.a
00:02:05.596 [236/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:05.596 [237/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:05.596 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.864 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:05.864 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:05.864 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:05.864 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:05.864 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:06.129 [244/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.129 [245/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.129 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:06.129 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:06.129 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:06.129 [249/710] Linking static target lib/librte_gso.a
00:02:06.387 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:06.387 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:06.387 [252/710] Linking static target lib/librte_regexdev.a
00:02:06.387 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:06.387 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:06.387 [255/710] Linking static target lib/librte_rawdev.a
00:02:06.387 [256/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:06.387 [257/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.387 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:06.387 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:06.652 [260/710] Linking static target lib/librte_mldev.a
00:02:06.652 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:06.652 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:06.652 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:06.652 [264/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:06.652 [265/710] Linking static target lib/librte_efd.a
00:02:06.652 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:06.652 [267/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:06.652 [268/710] Linking static target lib/librte_pcapng.a
00:02:06.652 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:06.916 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:06.916 [271/710] Linking static target lib/librte_stack.a
00:02:06.916 [272/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:06.916 [273/710] Linking static target lib/acl/libavx2_tmp.a
00:02:06.916 [274/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:06.916 [275/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:06.916 [276/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:06.916 [277/710] Linking static target lib/librte_lpm.a
00:02:06.917 [278/710] Linking static target lib/librte_hash.a
00:02:06.917 [279/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:06.917 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:06.917 [281/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:07.176 [282/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.176 [283/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:07.176 [284/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.176 [285/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:07.176 [286/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.176 [287/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.176 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:07.176 [289/710] Linking static target lib/librte_reorder.a
00:02:07.440 [290/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:07.440 [291/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:07.440 [292/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:07.440 [293/710] Linking static target lib/acl/libavx512_tmp.a
00:02:07.440 [294/710] Linking static target lib/librte_acl.a
00:02:07.440 [295/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:07.440 [296/710] Linking static target lib/librte_power.a
00:02:07.440 [297/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.440 [298/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.440 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:07.440 [300/710] Linking static target lib/librte_security.a
00:02:07.704 [301/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:07.704 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:07.704 [303/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:07.704 [304/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:07.704 [305/710] Linking static target lib/librte_mbuf.a
00:02:07.704 [306/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.704 [307/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:07.704 [308/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.964 [309/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:07.964 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:02:07.964 [311/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:07.964 [312/710] Linking static target lib/librte_rib.a
00:02:07.964 [313/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.964 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:02:07.964 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:02:07.964 [316/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:07.964 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:02:08.225 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.225 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:08.225 [320/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:08.225 [321/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:02:08.225 [322/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:02:08.225 [323/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:02:08.225 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:08.225 [325/710] Linking static target lib/fib/libtrie_avx512_tmp.a
00:02:08.487 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.487 [327/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.487 [328/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:08.487 [329/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.487 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:08.487 [331/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.749 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:08.749 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:02:09.011 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:09.011 [335/710] Linking static target lib/librte_eventdev.a
00:02:09.011 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:09.011 [337/710] Linking static target lib/librte_member.a
00:02:09.271 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:09.271 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:09.271 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:09.271 [341/710] Linking static target lib/librte_cryptodev.a
00:02:09.271 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:09.271 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:09.531 [344/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:09.531 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:09.531 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:09.531 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:09.531 [348/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:09.531 [349/710] Linking static target lib/librte_sched.a
00:02:09.531 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:09.531 [351/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:09.531 [352/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:09.531 [353/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:09.531 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:09.531 [355/710] Linking static target lib/librte_ethdev.a
00:02:09.531 [356/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:09.531 [357/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:09.531 [358/710] Linking static target lib/librte_fib.a
00:02:09.531 [359/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.794 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:09.794 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:09.794 [362/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:09.794 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:09.794 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:09.794 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:10.058 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:10.058 [367/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:10.058 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:10.058 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:10.058 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.058 [371/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.321 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:10.321 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:10.582 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:10.582 [375/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:10.582 [376/710] Linking static target lib/librte_pdump.a
00:02:10.582 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:10.582 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:10.582 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:10.582 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:02:10.847 [381/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:10.847 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:10.847 [383/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:10.847 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:10.847 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:10.847 [386/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:02:10.847 [387/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:10.847 [388/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:10.847 [389/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.108 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:11.108 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:11.108 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:11.108 [393/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:11.108 [394/710] Linking static target lib/librte_ipsec.a
00:02:11.108 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:11.108 [396/710] Linking static target lib/librte_table.a
00:02:11.366 [397/710] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:11.367 [398/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.367 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:11.630 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:02:11.630 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:11.630 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.895 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:11.895 [404/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:11.895 [405/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:12.158 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:12.158 [407/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:12.158 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:12.158 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:12.159 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:12.159 [411/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:02:12.159 [412/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:12.159 [413/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:12.159 [414/710] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:12.422 [415/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.422 [416/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.422 [417/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.422 [418/710] Linking target lib/librte_eal.so.24.0
00:02:12.422 [419/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:12.422 [420/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:12.422 [421/710] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:12.685 [422/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:12.685 [423/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:12.685 [424/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:12.685 [425/710] Linking static target drivers/librte_bus_vdev.a
00:02:12.685 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:12.685 [427/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:12.685 [428/710] Linking target lib/librte_ring.so.24.0
00:02:12.685 [429/710] Linking target lib/librte_meter.so.24.0
00:02:12.685 [430/710] Linking target lib/librte_pci.so.24.0
00:02:12.685 [431/710] Linking target lib/librte_timer.so.24.0
00:02:12.969 [432/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:02:12.969 [433/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:12.969 [434/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:12.969 [435/710] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:12.969 [436/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:12.969 [437/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:12.969 [438/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:12.969 [439/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:02:12.969 [440/710] Linking target lib/librte_acl.so.24.0
00:02:12.969 [441/710] Linking target lib/librte_rcu.so.24.0
00:02:12.969 [442/710] Linking target lib/librte_cfgfile.so.24.0
00:02:12.969 [443/710] Linking target lib/librte_mempool.so.24.0
00:02:12.969 [444/710] Linking target lib/librte_dmadev.so.24.0
00:02:13.274 [445/710] Linking target lib/librte_rawdev.so.24.0
00:02:13.274 [446/710] Linking target lib/librte_jobstats.so.24.0
00:02:13.274 [447/710] Linking static target lib/librte_port.a
00:02:13.274 [448/710] Linking target lib/librte_stack.so.24.0
00:02:13.274 [449/710] Linking static target lib/librte_graph.a
00:02:13.274 [450/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:13.274 [451/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:13.274 [452/710] Linking static target drivers/librte_bus_pci.a
00:02:13.274 [453/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:02:13.274 [454/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:13.274 [455/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:13.274 [456/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:13.274 [457/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:13.274 [458/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:13.274 [459/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:13.274 [460/710] Linking target lib/librte_mbuf.so.24.0
00:02:13.545 [461/710] Linking target lib/librte_rib.so.24.0
00:02:13.545 [462/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:02:13.545 [463/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:13.545 [464/710] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:13.545 [465/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:02:13.545 [466/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.545 [467/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:02:13.545 [468/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:13.545 [469/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:13.545 [470/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:13.545 [471/710] Linking target drivers/librte_bus_vdev.so.24.0
00:02:13.813 [472/710] Linking target lib/librte_bbdev.so.24.0
00:02:13.813 [473/710] Linking target lib/librte_net.so.24.0
00:02:13.813 [474/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:02:13.813 [475/710] Linking target lib/librte_compressdev.so.24.0
00:02:13.813 [476/710] Linking target lib/librte_gpudev.so.24.0
00:02:13.813 [477/710] Linking target lib/librte_distributor.so.24.0
00:02:13.813 [478/710] Linking target lib/librte_cryptodev.so.24.0
00:02:13.813 [479/710] Linking target lib/librte_regexdev.so.24.0
00:02:13.814 [480/710] Linking target lib/librte_mldev.so.24.0
00:02:13.814 [481/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:13.814 [482/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:13.814 [483/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.814 [484/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.814 [485/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:14.075 [486/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:14.075 [487/710] Linking target lib/librte_reorder.so.24.0
00:02:14.075 [488/710] Linking static target drivers/librte_mempool_ring.a
00:02:14.075 [489/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:02:14.075 [490/710] Linking target lib/librte_sched.so.24.0
00:02:14.075 [491/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:14.075 [492/710] Linking target lib/librte_fib.so.24.0
00:02:14.075 [493/710] Linking target drivers/librte_bus_pci.so.24.0
00:02:14.075 [494/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:14.075 [495/710] Linking target lib/librte_hash.so.24.0
00:02:14.075 [496/710] Linking target lib/librte_cmdline.so.24.0
00:02:14.075 [497/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:14.075 [498/710] Linking target drivers/librte_mempool_ring.so.24.0
00:02:14.075 [499/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.075 [500/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:02:14.075 [501/710] Linking target lib/librte_security.so.24.0
00:02:14.075 [502/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:14.075 [503/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:02:14.075 [504/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:14.075 [505/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:14.075 [506/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:14.338 [507/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:14.338 [508/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:14.338 [509/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:14.338 [510/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:14.338 [511/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:14.338 [512/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:14.338 [513/710] Linking target lib/librte_efd.so.24.0
00:02:14.338 [514/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:02:14.338 [515/710] Linking target lib/librte_lpm.so.24.0
00:02:14.603 [516/710] Linking target lib/librte_member.so.24.0
00:02:14.603 [517/710] Linking target lib/librte_ipsec.so.24.0
00:02:14.603 [518/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:02:14.603 [519/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:02:14.603 [520/710] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:02:14.603 [521/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:02:14.603 [522/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:14.603 [523/710] Generating symbol file
lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:14.603 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:14.868 [525/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:15.129 [526/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:15.129 [527/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:15.129 [528/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:15.129 [529/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:15.388 [530/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:15.388 [531/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:15.388 [532/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:15.653 [533/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:15.653 [534/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:15.653 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:15.653 [536/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:15.653 [537/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:15.653 [538/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:15.653 [539/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:15.914 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:15.914 [541/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:15.914 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:15.914 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:16.175 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 
00:02:16.175 [545/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:16.175 [546/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:16.175 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:16.175 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:16.436 [549/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:16.436 [550/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:16.436 [551/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:16.436 [552/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:16.436 [553/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:16.436 [554/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:16.436 [555/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:16.436 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:16.697 [557/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:16.697 [558/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:16.958 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:17.220 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:17.484 [561/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:17.484 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:17.484 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:17.484 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:17.747 [565/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 
00:02:17.747 [566/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:17.747 [567/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.747 [568/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:17.747 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:17.747 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:17.747 [571/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:17.747 [572/710] Linking target lib/librte_ethdev.so.24.0 00:02:18.013 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:18.013 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:18.013 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:18.013 [576/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:18.013 [577/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:18.013 [578/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:18.013 [579/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:18.273 [580/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:18.273 [581/710] Linking target lib/librte_metrics.so.24.0 00:02:18.273 [582/710] Linking target lib/librte_bpf.so.24.0 00:02:18.273 [583/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:18.273 [584/710] Linking target lib/librte_eventdev.so.24.0 00:02:18.538 [585/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:18.538 [586/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:18.539 [587/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 
00:02:18.539 [588/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:18.539 [589/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:18.539 [590/710] Linking target lib/librte_gro.so.24.0 00:02:18.539 [591/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:18.539 [592/710] Linking target lib/librte_gso.so.24.0 00:02:18.539 [593/710] Linking target lib/librte_bitratestats.so.24.0 00:02:18.539 [594/710] Linking target lib/librte_ip_frag.so.24.0 00:02:18.539 [595/710] Linking target lib/librte_latencystats.so.24.0 00:02:18.539 [596/710] Linking static target lib/librte_pdcp.a 00:02:18.539 [597/710] Linking target lib/librte_pcapng.so.24.0 00:02:18.539 [598/710] Linking target lib/librte_power.so.24.0 00:02:18.539 [599/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:18.539 [600/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:18.800 [601/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:18.800 [602/710] Linking target lib/librte_dispatcher.so.24.0 00:02:18.800 [603/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:18.800 [604/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:18.800 [605/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:18.800 [606/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:18.800 [607/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:18.800 [608/710] Linking target lib/librte_port.so.24.0 00:02:18.800 [609/710] Linking target lib/librte_pdump.so.24.0 00:02:19.062 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:19.062 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:19.062 
[612/710] Linking target lib/librte_graph.so.24.0 00:02:19.062 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:19.062 [614/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.062 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:19.062 [616/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:19.062 [617/710] Linking target lib/librte_pdcp.so.24.0 00:02:19.325 [618/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:19.325 [619/710] Linking target lib/librte_table.so.24.0 00:02:19.325 [620/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:19.325 [621/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:19.325 [622/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:19.325 [623/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:19.325 [624/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:19.325 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:19.325 [626/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:19.588 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:19.588 [628/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:19.588 [629/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:19.588 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:20.158 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:20.158 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:20.158 [633/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:20.158 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:20.158 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:20.158 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:20.158 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:20.158 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:20.158 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:20.416 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:20.416 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:20.416 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:20.416 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:20.676 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:20.676 [645/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:20.676 [646/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:20.935 [647/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:20.935 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:20.935 [649/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:20.935 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:20.935 [651/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:21.194 [652/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:21.194 [653/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:21.452 [654/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:21.452 [655/710] Compiling C object 
app/dpdk-test-regex.p/test-regex_main.c.o 00:02:21.452 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:21.452 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:21.711 [658/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:21.711 [659/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:21.711 [660/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:21.711 [661/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:21.711 [662/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:21.711 [663/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:21.711 [664/710] Linking static target drivers/librte_net_i40e.a 00:02:22.277 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:22.277 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:22.277 [667/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.535 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:22.535 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:22.793 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:22.793 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:22.793 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:23.051 [673/710] Linking static target lib/librte_node.a 00:02:23.309 [674/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.309 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:23.309 [676/710] Linking target lib/librte_node.so.24.0 
00:02:24.680 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:24.937 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:24.937 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:25.870 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:27.292 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:32.555 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.615 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:04.615 [684/710] Linking static target lib/librte_vhost.a 00:03:04.615 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.615 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:14.601 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:14.601 [688/710] Linking static target lib/librte_pipeline.a 00:03:15.168 [689/710] Linking target app/dpdk-dumpcap 00:03:15.168 [690/710] Linking target app/dpdk-test-cmdline 00:03:15.168 [691/710] Linking target app/dpdk-test-sad 00:03:15.168 [692/710] Linking target app/dpdk-test-gpudev 00:03:15.168 [693/710] Linking target app/dpdk-proc-info 00:03:15.428 [694/710] Linking target app/dpdk-pdump 00:03:15.428 [695/710] Linking target app/dpdk-test-security-perf 00:03:15.428 [696/710] Linking target app/dpdk-test-bbdev 00:03:15.428 [697/710] Linking target app/dpdk-test-regex 00:03:15.428 [698/710] Linking target app/dpdk-test-fib 00:03:15.428 [699/710] Linking target app/dpdk-test-dma-perf 00:03:15.428 [700/710] Linking target app/dpdk-graph 00:03:15.428 [701/710] Linking target app/dpdk-test-pipeline 00:03:15.428 [702/710] Linking target app/dpdk-test-flow-perf 00:03:15.428 [703/710] Linking target app/dpdk-test-acl 00:03:15.428 [704/710] Linking target app/dpdk-test-crypto-perf 00:03:15.428 [705/710] 
Linking target app/dpdk-test-mldev 00:03:15.428 [706/710] Linking target app/dpdk-test-eventdev 00:03:15.428 [707/710] Linking target app/dpdk-test-compress-perf 00:03:15.428 [708/710] Linking target app/dpdk-testpmd 00:03:17.332 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.590 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:17.590 17:51:25 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:17.590 17:51:25 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:17.590 17:51:25 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:17.590 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:17.590 [0/1] Installing files. 00:03:17.854 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:17.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.854 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 
00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:17.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.856 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 
00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.856 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.857 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:17.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:17.858 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:17.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.860 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:17.861 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:17.861 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.861 Installing 
lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.861 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing 
lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:17.862 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_sched.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.439 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_table.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:18.440 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:18.440 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:18.440 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.440 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:18.440 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-pdump to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.440 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:03:18.735 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:03:18.736 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24
00:03:18.736 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so
00:03:18.736 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24
00:03:18.736 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:03:18.736 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24
00:03:18.736 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:03:18.736 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24
00:03:18.736 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so
00:03:18.736 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24
00:03:18.736 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so
00:03:18.736 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24
00:03:18.736 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so
00:03:18.736 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24
00:03:18.736 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so
00:03:18.736 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24
00:03:18.736 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:03:18.736 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24
00:03:18.736 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so
00:03:18.736 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24
00:03:18.736 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so
00:03:18.736 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24
00:03:18.736 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:03:18.736 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24
00:03:18.736 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:18.736 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:18.736 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:18.736 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:18.736 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:18.736 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:18.736 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:18.736 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:18.736 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:18.736 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:18.736 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:18.736 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:18.736 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:18.736 Installing symlink pointing to librte_bitratestats.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:18.736 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:18.736 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:18.736 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:18.736 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:18.736 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:18.736 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:18.736 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:18.736 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:18.736 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:18.736 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:18.736 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:18.736 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:18.736 Installing symlink pointing to librte_dmadev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:18.736 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:18.736 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:18.736 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:18.736 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:18.736 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:18.736 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:18.736 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:18.736 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:18.736 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:18.736 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:18.736 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:18.736 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:18.736 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:18.736 Installing 
symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:18.736 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:18.736 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:18.736 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:18.736 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:18.736 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:18.736 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:18.736 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:18.736 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:18.736 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:18.736 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:18.736 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:18.736 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:18.736 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:18.736 
'./librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:18.736 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:18.737 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:18.737 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:18.737 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:18.737 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:18.737 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:18.737 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:18.737 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:18.737 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:18.737 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:18.737 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:18.737 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:18.737 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:18.737 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:18.737 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:18.737 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:18.737 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:18.737 Installing symlink pointing to librte_rib.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:18.737 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:18.737 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:18.737 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:18.737 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:18.737 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:18.737 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:18.737 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:18.737 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:18.737 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:18.737 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:18.737 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:18.737 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:18.737 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:18.737 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:18.737 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:18.737 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:18.737 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:18.737 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:18.737 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:18.737 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:18.737 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:18.737 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:18.737 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:18.737 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:18.737 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:18.737 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:18.737 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 
00:03:18.737 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:18.737 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:18.737 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:18.737 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:18.737 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:18.737 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:18.737 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:18.737 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:18.737 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:18.737 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:18.737 17:51:26 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:18.737 17:51:26 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:18.737 00:03:18.737 real 1m25.781s 00:03:18.737 user 17m55.931s 00:03:18.737 sys 2m5.644s 00:03:18.737 17:51:26 build_native_dpdk -- 
common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:18.737 17:51:26 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:18.737 ************************************ 00:03:18.737 END TEST build_native_dpdk 00:03:18.737 ************************************ 00:03:18.737 17:51:26 -- common/autotest_common.sh@1142 -- $ return 0 00:03:18.738 17:51:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:18.738 17:51:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:18.738 17:51:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:18.738 17:51:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:18.738 17:51:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:18.738 17:51:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:18.738 17:51:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:18.738 17:51:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:18.738 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:18.738 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.738 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.996 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:19.254 Using 'verbs' RDMA provider 00:03:29.790 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:39.767 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:39.767 Creating mk/config.mk...done. 00:03:39.767 Creating mk/cc.flags.mk...done. 00:03:39.767 Type 'make' to build. 
00:03:39.767 17:51:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:39.767 17:51:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:39.767 17:51:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:39.767 17:51:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:39.767 ************************************ 00:03:39.767 START TEST make 00:03:39.767 ************************************ 00:03:39.767 17:51:46 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:39.767 make[1]: Nothing to be done for 'all'. 00:03:40.342 The Meson build system 00:03:40.342 Version: 1.3.1 00:03:40.342 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:40.342 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:40.342 Build type: native build 00:03:40.342 Project name: libvfio-user 00:03:40.342 Project version: 0.0.1 00:03:40.342 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:40.342 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:40.342 Host machine cpu family: x86_64 00:03:40.342 Host machine cpu: x86_64 00:03:40.342 Run-time dependency threads found: YES 00:03:40.342 Library dl found: YES 00:03:40.342 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:40.342 Run-time dependency json-c found: YES 0.17 00:03:40.342 Run-time dependency cmocka found: YES 1.1.7 00:03:40.342 Program pytest-3 found: NO 00:03:40.342 Program flake8 found: NO 00:03:40.342 Program misspell-fixer found: NO 00:03:40.342 Program restructuredtext-lint found: NO 00:03:40.342 Program valgrind found: YES (/usr/bin/valgrind) 00:03:40.342 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:40.342 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:40.342 Compiler for C supports arguments -Wwrite-strings: YES 00:03:40.342 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:40.342 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:40.342 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:40.342 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:40.342 Build targets in project: 8 00:03:40.342 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:40.342 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:40.342 00:03:40.342 libvfio-user 0.0.1 00:03:40.342 00:03:40.342 User defined options 00:03:40.342 buildtype : debug 00:03:40.342 default_library: shared 00:03:40.342 libdir : /usr/local/lib 00:03:40.342 00:03:40.342 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:41.305 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:41.305 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:41.305 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:41.305 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:41.564 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:41.564 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:41.564 [6/37] Compiling C object samples/null.p/null.c.o 00:03:41.564 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:41.564 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:41.564 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:41.564 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:41.564 [11/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:41.564 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:41.564 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:41.564 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:41.564 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:41.564 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:41.564 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:41.564 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:41.564 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:41.564 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:41.564 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:41.564 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:41.564 [23/37] Compiling C object samples/client.p/client.c.o 00:03:41.564 [24/37] Compiling C object samples/server.p/server.c.o 00:03:41.564 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:41.826 [26/37] Linking target samples/client 00:03:41.826 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:41.826 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:41.826 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:41.826 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:41.826 [31/37] Linking target test/unit_tests 00:03:42.094 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:42.094 [33/37] Linking target samples/null 00:03:42.094 [34/37] Linking target samples/lspci 00:03:42.094 [35/37] Linking target samples/server 00:03:42.094 [36/37] Linking target samples/gpio-pci-idio-16 00:03:42.094 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:42.094 INFO: autodetecting backend as ninja 00:03:42.095 INFO: calculating backend command to run: /usr/local/bin/ninja 
-C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:42.358 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:42.935 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:42.935 ninja: no work to do. 00:03:55.130 CC lib/log/log.o 00:03:55.130 CC lib/ut_mock/mock.o 00:03:55.130 CC lib/log/log_flags.o 00:03:55.130 CC lib/log/log_deprecated.o 00:03:55.130 CC lib/ut/ut.o 00:03:55.130 LIB libspdk_log.a 00:03:55.130 LIB libspdk_ut_mock.a 00:03:55.130 LIB libspdk_ut.a 00:03:55.130 SO libspdk_ut_mock.so.6.0 00:03:55.130 SO libspdk_ut.so.2.0 00:03:55.130 SO libspdk_log.so.7.0 00:03:55.130 SYMLINK libspdk_ut.so 00:03:55.130 SYMLINK libspdk_ut_mock.so 00:03:55.130 SYMLINK libspdk_log.so 00:03:55.130 CC lib/ioat/ioat.o 00:03:55.130 CXX lib/trace_parser/trace.o 00:03:55.130 CC lib/util/base64.o 00:03:55.130 CC lib/dma/dma.o 00:03:55.130 CC lib/util/bit_array.o 00:03:55.130 CC lib/util/cpuset.o 00:03:55.130 CC lib/util/crc16.o 00:03:55.130 CC lib/util/crc32.o 00:03:55.130 CC lib/util/crc32c.o 00:03:55.130 CC lib/util/crc32_ieee.o 00:03:55.130 CC lib/util/crc64.o 00:03:55.130 CC lib/util/dif.o 00:03:55.130 CC lib/util/fd.o 00:03:55.130 CC lib/util/fd_group.o 00:03:55.130 CC lib/util/file.o 00:03:55.130 CC lib/util/hexlify.o 00:03:55.130 CC lib/util/iov.o 00:03:55.130 CC lib/util/math.o 00:03:55.130 CC lib/util/net.o 00:03:55.130 CC lib/util/pipe.o 00:03:55.130 CC lib/util/strerror_tls.o 00:03:55.130 CC lib/util/string.o 00:03:55.130 CC lib/util/uuid.o 00:03:55.130 CC lib/util/xor.o 00:03:55.130 CC lib/util/zipf.o 00:03:55.130 CC lib/vfio_user/host/vfio_user_pci.o 00:03:55.130 CC lib/vfio_user/host/vfio_user.o 00:03:55.130 LIB libspdk_dma.a 00:03:55.130 SO libspdk_dma.so.4.0 00:03:55.130 SYMLINK libspdk_dma.so 00:03:55.130 LIB libspdk_ioat.a 
00:03:55.130 SO libspdk_ioat.so.7.0 00:03:55.130 LIB libspdk_vfio_user.a 00:03:55.130 SO libspdk_vfio_user.so.5.0 00:03:55.409 SYMLINK libspdk_ioat.so 00:03:55.409 SYMLINK libspdk_vfio_user.so 00:03:55.409 LIB libspdk_util.a 00:03:55.409 SO libspdk_util.so.10.0 00:03:55.680 SYMLINK libspdk_util.so 00:03:55.680 CC lib/rdma_utils/rdma_utils.o 00:03:55.680 CC lib/json/json_parse.o 00:03:55.680 CC lib/conf/conf.o 00:03:55.680 CC lib/vmd/vmd.o 00:03:55.680 CC lib/idxd/idxd.o 00:03:55.680 CC lib/rdma_provider/common.o 00:03:55.680 CC lib/env_dpdk/env.o 00:03:55.680 CC lib/vmd/led.o 00:03:55.680 CC lib/json/json_util.o 00:03:55.680 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:55.680 CC lib/idxd/idxd_user.o 00:03:55.680 CC lib/env_dpdk/memory.o 00:03:55.680 CC lib/json/json_write.o 00:03:55.680 CC lib/idxd/idxd_kernel.o 00:03:55.680 CC lib/env_dpdk/pci.o 00:03:55.680 CC lib/env_dpdk/init.o 00:03:55.680 CC lib/env_dpdk/threads.o 00:03:55.680 CC lib/env_dpdk/pci_ioat.o 00:03:55.680 CC lib/env_dpdk/pci_virtio.o 00:03:55.680 CC lib/env_dpdk/pci_vmd.o 00:03:55.680 CC lib/env_dpdk/pci_idxd.o 00:03:55.680 CC lib/env_dpdk/pci_event.o 00:03:55.680 CC lib/env_dpdk/sigbus_handler.o 00:03:55.680 CC lib/env_dpdk/pci_dpdk.o 00:03:55.680 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:55.680 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:55.680 LIB libspdk_trace_parser.a 00:03:55.937 SO libspdk_trace_parser.so.5.0 00:03:55.937 LIB libspdk_rdma_provider.a 00:03:55.937 SYMLINK libspdk_trace_parser.so 00:03:55.937 SO libspdk_rdma_provider.so.6.0 00:03:55.937 SYMLINK libspdk_rdma_provider.so 00:03:56.194 LIB libspdk_json.a 00:03:56.194 LIB libspdk_conf.a 00:03:56.194 SO libspdk_json.so.6.0 00:03:56.194 LIB libspdk_rdma_utils.a 00:03:56.194 SO libspdk_conf.so.6.0 00:03:56.194 SO libspdk_rdma_utils.so.1.0 00:03:56.194 SYMLINK libspdk_json.so 00:03:56.194 SYMLINK libspdk_conf.so 00:03:56.194 SYMLINK libspdk_rdma_utils.so 00:03:56.194 CC lib/jsonrpc/jsonrpc_server.o 00:03:56.194 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:56.194 CC lib/jsonrpc/jsonrpc_client.o 00:03:56.194 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:56.451 LIB libspdk_idxd.a 00:03:56.451 LIB libspdk_vmd.a 00:03:56.451 SO libspdk_idxd.so.12.0 00:03:56.451 SO libspdk_vmd.so.6.0 00:03:56.451 SYMLINK libspdk_idxd.so 00:03:56.451 SYMLINK libspdk_vmd.so 00:03:56.707 LIB libspdk_jsonrpc.a 00:03:56.707 SO libspdk_jsonrpc.so.6.0 00:03:56.707 SYMLINK libspdk_jsonrpc.so 00:03:56.964 CC lib/rpc/rpc.o 00:03:56.964 LIB libspdk_rpc.a 00:03:57.220 SO libspdk_rpc.so.6.0 00:03:57.220 SYMLINK libspdk_rpc.so 00:03:57.220 CC lib/trace/trace.o 00:03:57.220 CC lib/notify/notify.o 00:03:57.220 CC lib/keyring/keyring.o 00:03:57.220 CC lib/trace/trace_flags.o 00:03:57.220 CC lib/notify/notify_rpc.o 00:03:57.220 CC lib/keyring/keyring_rpc.o 00:03:57.220 CC lib/trace/trace_rpc.o 00:03:57.477 LIB libspdk_notify.a 00:03:57.477 SO libspdk_notify.so.6.0 00:03:57.477 LIB libspdk_keyring.a 00:03:57.477 SYMLINK libspdk_notify.so 00:03:57.477 LIB libspdk_trace.a 00:03:57.477 SO libspdk_keyring.so.1.0 00:03:57.735 SO libspdk_trace.so.10.0 00:03:57.735 SYMLINK libspdk_keyring.so 00:03:57.735 SYMLINK libspdk_trace.so 00:03:57.735 LIB libspdk_env_dpdk.a 00:03:57.735 SO libspdk_env_dpdk.so.15.0 00:03:57.735 CC lib/sock/sock.o 00:03:57.735 CC lib/sock/sock_rpc.o 00:03:57.735 CC lib/thread/thread.o 00:03:57.735 CC lib/thread/iobuf.o 00:03:57.992 SYMLINK libspdk_env_dpdk.so 00:03:58.248 LIB libspdk_sock.a 00:03:58.248 SO libspdk_sock.so.10.0 00:03:58.248 SYMLINK libspdk_sock.so 00:03:58.506 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:58.506 CC lib/nvme/nvme_ctrlr.o 00:03:58.506 CC lib/nvme/nvme_fabric.o 00:03:58.506 CC lib/nvme/nvme_ns_cmd.o 00:03:58.506 CC lib/nvme/nvme_ns.o 00:03:58.506 CC lib/nvme/nvme_pcie_common.o 00:03:58.506 CC lib/nvme/nvme_pcie.o 00:03:58.506 CC lib/nvme/nvme_qpair.o 00:03:58.506 CC lib/nvme/nvme.o 00:03:58.506 CC lib/nvme/nvme_quirks.o 00:03:58.506 CC lib/nvme/nvme_transport.o 00:03:58.506 CC 
lib/nvme/nvme_discovery.o 00:03:58.506 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:58.506 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:58.506 CC lib/nvme/nvme_tcp.o 00:03:58.506 CC lib/nvme/nvme_opal.o 00:03:58.506 CC lib/nvme/nvme_io_msg.o 00:03:58.506 CC lib/nvme/nvme_poll_group.o 00:03:58.506 CC lib/nvme/nvme_zns.o 00:03:58.506 CC lib/nvme/nvme_stubs.o 00:03:58.506 CC lib/nvme/nvme_auth.o 00:03:58.506 CC lib/nvme/nvme_cuse.o 00:03:58.506 CC lib/nvme/nvme_vfio_user.o 00:03:58.506 CC lib/nvme/nvme_rdma.o 00:03:59.438 LIB libspdk_thread.a 00:03:59.438 SO libspdk_thread.so.10.1 00:03:59.438 SYMLINK libspdk_thread.so 00:03:59.701 CC lib/blob/blobstore.o 00:03:59.701 CC lib/vfu_tgt/tgt_endpoint.o 00:03:59.701 CC lib/blob/request.o 00:03:59.701 CC lib/virtio/virtio.o 00:03:59.701 CC lib/vfu_tgt/tgt_rpc.o 00:03:59.701 CC lib/virtio/virtio_vhost_user.o 00:03:59.701 CC lib/blob/zeroes.o 00:03:59.701 CC lib/virtio/virtio_vfio_user.o 00:03:59.701 CC lib/blob/blob_bs_dev.o 00:03:59.701 CC lib/virtio/virtio_pci.o 00:03:59.701 CC lib/init/json_config.o 00:03:59.701 CC lib/accel/accel.o 00:03:59.701 CC lib/init/subsystem.o 00:03:59.701 CC lib/accel/accel_rpc.o 00:03:59.701 CC lib/init/subsystem_rpc.o 00:03:59.701 CC lib/accel/accel_sw.o 00:03:59.701 CC lib/init/rpc.o 00:03:59.962 LIB libspdk_init.a 00:03:59.962 SO libspdk_init.so.5.0 00:03:59.962 LIB libspdk_vfu_tgt.a 00:03:59.962 LIB libspdk_virtio.a 00:03:59.962 SYMLINK libspdk_init.so 00:03:59.962 SO libspdk_vfu_tgt.so.3.0 00:03:59.962 SO libspdk_virtio.so.7.0 00:03:59.962 SYMLINK libspdk_vfu_tgt.so 00:03:59.962 SYMLINK libspdk_virtio.so 00:04:00.219 CC lib/event/app.o 00:04:00.220 CC lib/event/reactor.o 00:04:00.220 CC lib/event/log_rpc.o 00:04:00.220 CC lib/event/app_rpc.o 00:04:00.220 CC lib/event/scheduler_static.o 00:04:00.477 LIB libspdk_event.a 00:04:00.477 SO libspdk_event.so.14.0 00:04:00.734 SYMLINK libspdk_event.so 00:04:00.734 LIB libspdk_accel.a 00:04:00.734 SO libspdk_accel.so.16.0 00:04:00.734 SYMLINK libspdk_accel.so 
00:04:00.992 LIB libspdk_nvme.a 00:04:00.992 CC lib/bdev/bdev.o 00:04:00.992 CC lib/bdev/bdev_rpc.o 00:04:00.992 CC lib/bdev/bdev_zone.o 00:04:00.992 CC lib/bdev/part.o 00:04:00.992 CC lib/bdev/scsi_nvme.o 00:04:00.992 SO libspdk_nvme.so.13.1 00:04:01.249 SYMLINK libspdk_nvme.so 00:04:02.620 LIB libspdk_blob.a 00:04:02.620 SO libspdk_blob.so.11.0 00:04:02.879 SYMLINK libspdk_blob.so 00:04:02.879 CC lib/lvol/lvol.o 00:04:02.879 CC lib/blobfs/blobfs.o 00:04:02.879 CC lib/blobfs/tree.o 00:04:03.445 LIB libspdk_bdev.a 00:04:03.704 SO libspdk_bdev.so.16.0 00:04:03.704 SYMLINK libspdk_bdev.so 00:04:03.704 LIB libspdk_blobfs.a 00:04:03.704 SO libspdk_blobfs.so.10.0 00:04:03.704 SYMLINK libspdk_blobfs.so 00:04:03.972 CC lib/scsi/dev.o 00:04:03.972 CC lib/nvmf/ctrlr.o 00:04:03.972 CC lib/nbd/nbd.o 00:04:03.972 CC lib/ublk/ublk.o 00:04:03.972 CC lib/scsi/lun.o 00:04:03.972 CC lib/ftl/ftl_core.o 00:04:03.972 CC lib/nbd/nbd_rpc.o 00:04:03.972 CC lib/ublk/ublk_rpc.o 00:04:03.972 CC lib/nvmf/ctrlr_discovery.o 00:04:03.972 CC lib/scsi/port.o 00:04:03.972 CC lib/ftl/ftl_init.o 00:04:03.972 CC lib/nvmf/ctrlr_bdev.o 00:04:03.972 CC lib/scsi/scsi.o 00:04:03.972 CC lib/ftl/ftl_layout.o 00:04:03.972 CC lib/nvmf/subsystem.o 00:04:03.972 CC lib/scsi/scsi_bdev.o 00:04:03.972 CC lib/ftl/ftl_debug.o 00:04:03.972 CC lib/nvmf/nvmf.o 00:04:03.972 CC lib/scsi/scsi_pr.o 00:04:03.972 CC lib/ftl/ftl_io.o 00:04:03.972 CC lib/nvmf/nvmf_rpc.o 00:04:03.972 CC lib/ftl/ftl_sb.o 00:04:03.972 CC lib/scsi/scsi_rpc.o 00:04:03.972 CC lib/scsi/task.o 00:04:03.972 CC lib/nvmf/transport.o 00:04:03.972 CC lib/nvmf/tcp.o 00:04:03.972 CC lib/ftl/ftl_l2p.o 00:04:03.972 CC lib/ftl/ftl_l2p_flat.o 00:04:03.972 CC lib/nvmf/stubs.o 00:04:03.972 CC lib/ftl/ftl_band.o 00:04:03.972 CC lib/ftl/ftl_nv_cache.o 00:04:03.972 CC lib/nvmf/mdns_server.o 00:04:03.972 CC lib/nvmf/vfio_user.o 00:04:03.972 CC lib/nvmf/rdma.o 00:04:03.972 CC lib/ftl/ftl_band_ops.o 00:04:03.972 CC lib/nvmf/auth.o 00:04:03.972 CC lib/ftl/ftl_writer.o 
00:04:03.972 CC lib/ftl/ftl_rq.o 00:04:03.972 CC lib/ftl/ftl_reloc.o 00:04:03.972 CC lib/ftl/ftl_l2p_cache.o 00:04:03.972 CC lib/ftl/ftl_p2l.o 00:04:03.972 CC lib/ftl/mngt/ftl_mngt.o 00:04:03.972 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:03.972 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:03.972 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:03.972 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:03.972 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:03.972 LIB libspdk_lvol.a 00:04:03.972 SO libspdk_lvol.so.10.0 00:04:04.231 SYMLINK libspdk_lvol.so 00:04:04.231 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:04.231 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:04.231 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:04.231 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:04.231 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:04.231 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:04.231 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:04.231 CC lib/ftl/utils/ftl_conf.o 00:04:04.231 CC lib/ftl/utils/ftl_md.o 00:04:04.231 CC lib/ftl/utils/ftl_mempool.o 00:04:04.231 CC lib/ftl/utils/ftl_bitmap.o 00:04:04.231 CC lib/ftl/utils/ftl_property.o 00:04:04.231 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:04.231 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:04.231 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:04.489 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:04.489 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:04.489 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:04.489 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:04.489 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:04.489 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:04.489 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:04.489 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:04.489 CC lib/ftl/base/ftl_base_dev.o 00:04:04.489 CC lib/ftl/base/ftl_base_bdev.o 00:04:04.489 CC lib/ftl/ftl_trace.o 00:04:04.748 LIB libspdk_nbd.a 00:04:04.748 SO libspdk_nbd.so.7.0 00:04:04.748 LIB libspdk_scsi.a 00:04:04.748 SYMLINK libspdk_nbd.so 00:04:04.748 SO libspdk_scsi.so.9.0 00:04:05.006 LIB libspdk_ublk.a 00:04:05.006 SYMLINK libspdk_scsi.so 00:04:05.006 SO libspdk_ublk.so.3.0 
00:04:05.006 SYMLINK libspdk_ublk.so 00:04:05.006 CC lib/vhost/vhost.o 00:04:05.006 CC lib/iscsi/conn.o 00:04:05.006 CC lib/vhost/vhost_rpc.o 00:04:05.006 CC lib/iscsi/init_grp.o 00:04:05.006 CC lib/vhost/vhost_scsi.o 00:04:05.006 CC lib/iscsi/iscsi.o 00:04:05.006 CC lib/vhost/vhost_blk.o 00:04:05.006 CC lib/iscsi/md5.o 00:04:05.006 CC lib/vhost/rte_vhost_user.o 00:04:05.006 CC lib/iscsi/param.o 00:04:05.006 CC lib/iscsi/portal_grp.o 00:04:05.006 CC lib/iscsi/tgt_node.o 00:04:05.006 CC lib/iscsi/iscsi_subsystem.o 00:04:05.006 CC lib/iscsi/iscsi_rpc.o 00:04:05.006 CC lib/iscsi/task.o 00:04:05.265 LIB libspdk_ftl.a 00:04:05.522 SO libspdk_ftl.so.9.0 00:04:05.780 SYMLINK libspdk_ftl.so 00:04:06.344 LIB libspdk_vhost.a 00:04:06.344 SO libspdk_vhost.so.8.0 00:04:06.344 LIB libspdk_nvmf.a 00:04:06.344 SYMLINK libspdk_vhost.so 00:04:06.603 SO libspdk_nvmf.so.19.0 00:04:06.603 LIB libspdk_iscsi.a 00:04:06.603 SO libspdk_iscsi.so.8.0 00:04:06.603 SYMLINK libspdk_nvmf.so 00:04:06.861 SYMLINK libspdk_iscsi.so 00:04:07.119 CC module/vfu_device/vfu_virtio.o 00:04:07.119 CC module/env_dpdk/env_dpdk_rpc.o 00:04:07.119 CC module/vfu_device/vfu_virtio_blk.o 00:04:07.119 CC module/vfu_device/vfu_virtio_scsi.o 00:04:07.119 CC module/vfu_device/vfu_virtio_rpc.o 00:04:07.119 CC module/accel/ioat/accel_ioat.o 00:04:07.119 CC module/blob/bdev/blob_bdev.o 00:04:07.119 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.119 CC module/accel/error/accel_error.o 00:04:07.119 CC module/sock/posix/posix.o 00:04:07.119 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:07.119 CC module/accel/dsa/accel_dsa.o 00:04:07.119 CC module/accel/error/accel_error_rpc.o 00:04:07.119 CC module/accel/iaa/accel_iaa.o 00:04:07.119 CC module/accel/dsa/accel_dsa_rpc.o 00:04:07.119 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:07.119 CC module/accel/iaa/accel_iaa_rpc.o 00:04:07.119 CC module/scheduler/gscheduler/gscheduler.o 00:04:07.119 CC module/keyring/linux/keyring.o 00:04:07.119 CC 
module/keyring/linux/keyring_rpc.o 00:04:07.119 CC module/keyring/file/keyring.o 00:04:07.119 CC module/keyring/file/keyring_rpc.o 00:04:07.119 LIB libspdk_env_dpdk_rpc.a 00:04:07.119 SO libspdk_env_dpdk_rpc.so.6.0 00:04:07.377 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.377 LIB libspdk_keyring_file.a 00:04:07.377 LIB libspdk_keyring_linux.a 00:04:07.377 LIB libspdk_scheduler_gscheduler.a 00:04:07.377 LIB libspdk_scheduler_dpdk_governor.a 00:04:07.377 SO libspdk_keyring_file.so.1.0 00:04:07.377 SO libspdk_keyring_linux.so.1.0 00:04:07.377 SO libspdk_scheduler_gscheduler.so.4.0 00:04:07.377 LIB libspdk_accel_error.a 00:04:07.377 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:07.377 LIB libspdk_accel_ioat.a 00:04:07.377 LIB libspdk_scheduler_dynamic.a 00:04:07.377 SO libspdk_accel_error.so.2.0 00:04:07.377 LIB libspdk_accel_iaa.a 00:04:07.377 SO libspdk_accel_ioat.so.6.0 00:04:07.377 SO libspdk_scheduler_dynamic.so.4.0 00:04:07.377 SYMLINK libspdk_keyring_file.so 00:04:07.377 SYMLINK libspdk_keyring_linux.so 00:04:07.377 SYMLINK libspdk_scheduler_gscheduler.so 00:04:07.377 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:07.377 SO libspdk_accel_iaa.so.3.0 00:04:07.377 SYMLINK libspdk_accel_error.so 00:04:07.377 LIB libspdk_accel_dsa.a 00:04:07.377 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.377 SYMLINK libspdk_accel_ioat.so 00:04:07.377 LIB libspdk_blob_bdev.a 00:04:07.377 SO libspdk_accel_dsa.so.5.0 00:04:07.377 SYMLINK libspdk_accel_iaa.so 00:04:07.377 SO libspdk_blob_bdev.so.11.0 00:04:07.377 SYMLINK libspdk_blob_bdev.so 00:04:07.377 SYMLINK libspdk_accel_dsa.so 00:04:07.637 LIB libspdk_vfu_device.a 00:04:07.637 SO libspdk_vfu_device.so.3.0 00:04:07.637 CC module/bdev/delay/vbdev_delay.o 00:04:07.637 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:07.637 CC module/blobfs/bdev/blobfs_bdev.o 00:04:07.637 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:07.637 CC module/bdev/nvme/bdev_nvme.o 00:04:07.637 CC module/bdev/malloc/bdev_malloc.o 00:04:07.637 CC 
module/bdev/null/bdev_null.o 00:04:07.637 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:07.637 CC module/bdev/error/vbdev_error.o 00:04:07.637 CC module/bdev/null/bdev_null_rpc.o 00:04:07.637 CC module/bdev/nvme/nvme_rpc.o 00:04:07.637 CC module/bdev/passthru/vbdev_passthru.o 00:04:07.637 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:07.637 CC module/bdev/error/vbdev_error_rpc.o 00:04:07.637 CC module/bdev/lvol/vbdev_lvol.o 00:04:07.637 CC module/bdev/nvme/bdev_mdns_client.o 00:04:07.637 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:07.637 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:07.637 CC module/bdev/gpt/gpt.o 00:04:07.637 CC module/bdev/split/vbdev_split.o 00:04:07.637 CC module/bdev/gpt/vbdev_gpt.o 00:04:07.637 CC module/bdev/nvme/vbdev_opal.o 00:04:07.637 CC module/bdev/split/vbdev_split_rpc.o 00:04:07.637 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:07.637 CC module/bdev/ftl/bdev_ftl.o 00:04:07.637 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:07.637 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:07.637 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:07.637 CC module/bdev/iscsi/bdev_iscsi.o 00:04:07.637 CC module/bdev/aio/bdev_aio.o 00:04:07.637 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:07.637 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:07.637 CC module/bdev/aio/bdev_aio_rpc.o 00:04:07.637 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:07.637 CC module/bdev/raid/bdev_raid.o 00:04:07.637 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:07.637 CC module/bdev/raid/bdev_raid_rpc.o 00:04:07.637 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:07.637 CC module/bdev/raid/bdev_raid_sb.o 00:04:07.637 CC module/bdev/raid/raid0.o 00:04:07.637 CC module/bdev/raid/raid1.o 00:04:07.637 CC module/bdev/raid/concat.o 00:04:07.896 SYMLINK libspdk_vfu_device.so 00:04:07.896 LIB libspdk_sock_posix.a 00:04:08.155 SO libspdk_sock_posix.so.6.0 00:04:08.155 LIB libspdk_blobfs_bdev.a 00:04:08.155 LIB libspdk_bdev_split.a 00:04:08.155 SO libspdk_blobfs_bdev.so.6.0 00:04:08.155 
SYMLINK libspdk_sock_posix.so 00:04:08.155 SO libspdk_bdev_split.so.6.0 00:04:08.155 LIB libspdk_bdev_null.a 00:04:08.155 SO libspdk_bdev_null.so.6.0 00:04:08.155 SYMLINK libspdk_blobfs_bdev.so 00:04:08.155 LIB libspdk_bdev_passthru.a 00:04:08.155 LIB libspdk_bdev_error.a 00:04:08.155 SYMLINK libspdk_bdev_split.so 00:04:08.155 LIB libspdk_bdev_gpt.a 00:04:08.155 LIB libspdk_bdev_aio.a 00:04:08.155 LIB libspdk_bdev_zone_block.a 00:04:08.155 SO libspdk_bdev_error.so.6.0 00:04:08.155 SO libspdk_bdev_passthru.so.6.0 00:04:08.155 LIB libspdk_bdev_ftl.a 00:04:08.155 SYMLINK libspdk_bdev_null.so 00:04:08.155 SO libspdk_bdev_gpt.so.6.0 00:04:08.413 SO libspdk_bdev_aio.so.6.0 00:04:08.413 SO libspdk_bdev_zone_block.so.6.0 00:04:08.413 SO libspdk_bdev_ftl.so.6.0 00:04:08.413 LIB libspdk_bdev_malloc.a 00:04:08.413 LIB libspdk_bdev_delay.a 00:04:08.413 LIB libspdk_bdev_iscsi.a 00:04:08.413 SYMLINK libspdk_bdev_passthru.so 00:04:08.413 SYMLINK libspdk_bdev_error.so 00:04:08.413 SO libspdk_bdev_malloc.so.6.0 00:04:08.413 SYMLINK libspdk_bdev_gpt.so 00:04:08.413 SO libspdk_bdev_delay.so.6.0 00:04:08.413 SYMLINK libspdk_bdev_zone_block.so 00:04:08.413 SYMLINK libspdk_bdev_aio.so 00:04:08.413 SO libspdk_bdev_iscsi.so.6.0 00:04:08.413 SYMLINK libspdk_bdev_ftl.so 00:04:08.413 SYMLINK libspdk_bdev_malloc.so 00:04:08.413 SYMLINK libspdk_bdev_delay.so 00:04:08.413 SYMLINK libspdk_bdev_iscsi.so 00:04:08.413 LIB libspdk_bdev_virtio.a 00:04:08.413 SO libspdk_bdev_virtio.so.6.0 00:04:08.413 LIB libspdk_bdev_lvol.a 00:04:08.413 SO libspdk_bdev_lvol.so.6.0 00:04:08.672 SYMLINK libspdk_bdev_virtio.so 00:04:08.672 SYMLINK libspdk_bdev_lvol.so 00:04:08.935 LIB libspdk_bdev_raid.a 00:04:08.935 SO libspdk_bdev_raid.so.6.0 00:04:08.935 SYMLINK libspdk_bdev_raid.so 00:04:09.920 LIB libspdk_bdev_nvme.a 00:04:10.178 SO libspdk_bdev_nvme.so.7.0 00:04:10.178 SYMLINK libspdk_bdev_nvme.so 00:04:10.436 CC module/event/subsystems/sock/sock.o 00:04:10.436 CC module/event/subsystems/keyring/keyring.o 
00:04:10.436 CC module/event/subsystems/iobuf/iobuf.o 00:04:10.436 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:10.436 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:10.436 CC module/event/subsystems/vmd/vmd.o 00:04:10.436 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:10.436 CC module/event/subsystems/scheduler/scheduler.o 00:04:10.436 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:10.695 LIB libspdk_event_keyring.a 00:04:10.695 LIB libspdk_event_vhost_blk.a 00:04:10.695 LIB libspdk_event_scheduler.a 00:04:10.695 LIB libspdk_event_vfu_tgt.a 00:04:10.695 LIB libspdk_event_vmd.a 00:04:10.695 LIB libspdk_event_sock.a 00:04:10.695 SO libspdk_event_keyring.so.1.0 00:04:10.695 SO libspdk_event_vhost_blk.so.3.0 00:04:10.695 SO libspdk_event_scheduler.so.4.0 00:04:10.695 SO libspdk_event_vfu_tgt.so.3.0 00:04:10.695 LIB libspdk_event_iobuf.a 00:04:10.695 SO libspdk_event_sock.so.5.0 00:04:10.695 SO libspdk_event_vmd.so.6.0 00:04:10.695 SO libspdk_event_iobuf.so.3.0 00:04:10.695 SYMLINK libspdk_event_keyring.so 00:04:10.695 SYMLINK libspdk_event_vhost_blk.so 00:04:10.695 SYMLINK libspdk_event_scheduler.so 00:04:10.695 SYMLINK libspdk_event_sock.so 00:04:10.695 SYMLINK libspdk_event_vfu_tgt.so 00:04:10.695 SYMLINK libspdk_event_vmd.so 00:04:10.695 SYMLINK libspdk_event_iobuf.so 00:04:10.952 CC module/event/subsystems/accel/accel.o 00:04:11.210 LIB libspdk_event_accel.a 00:04:11.210 SO libspdk_event_accel.so.6.0 00:04:11.210 SYMLINK libspdk_event_accel.so 00:04:11.469 CC module/event/subsystems/bdev/bdev.o 00:04:11.469 LIB libspdk_event_bdev.a 00:04:11.469 SO libspdk_event_bdev.so.6.0 00:04:11.727 SYMLINK libspdk_event_bdev.so 00:04:11.727 CC module/event/subsystems/scsi/scsi.o 00:04:11.727 CC module/event/subsystems/nbd/nbd.o 00:04:11.727 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:11.727 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:11.727 CC module/event/subsystems/ublk/ublk.o 00:04:11.985 LIB libspdk_event_nbd.a 00:04:11.985 LIB 
libspdk_event_scsi.a 00:04:11.985 SO libspdk_event_nbd.so.6.0 00:04:11.985 SO libspdk_event_scsi.so.6.0 00:04:11.985 LIB libspdk_event_ublk.a 00:04:11.985 SO libspdk_event_ublk.so.3.0 00:04:11.985 SYMLINK libspdk_event_nbd.so 00:04:11.985 SYMLINK libspdk_event_scsi.so 00:04:11.985 LIB libspdk_event_nvmf.a 00:04:11.985 SYMLINK libspdk_event_ublk.so 00:04:11.985 SO libspdk_event_nvmf.so.6.0 00:04:12.244 SYMLINK libspdk_event_nvmf.so 00:04:12.244 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:12.244 CC module/event/subsystems/iscsi/iscsi.o 00:04:12.244 LIB libspdk_event_vhost_scsi.a 00:04:12.502 SO libspdk_event_vhost_scsi.so.3.0 00:04:12.502 LIB libspdk_event_iscsi.a 00:04:12.502 SO libspdk_event_iscsi.so.6.0 00:04:12.502 SYMLINK libspdk_event_vhost_scsi.so 00:04:12.502 SYMLINK libspdk_event_iscsi.so 00:04:12.502 SO libspdk.so.6.0 00:04:12.502 SYMLINK libspdk.so 00:04:12.766 CXX app/trace/trace.o 00:04:12.766 CC app/trace_record/trace_record.o 00:04:12.766 CC app/spdk_top/spdk_top.o 00:04:12.766 CC app/spdk_nvme_identify/identify.o 00:04:12.766 CC app/spdk_nvme_perf/perf.o 00:04:12.766 CC test/rpc_client/rpc_client_test.o 00:04:12.766 CC app/spdk_nvme_discover/discovery_aer.o 00:04:12.766 CC app/spdk_lspci/spdk_lspci.o 00:04:12.766 TEST_HEADER include/spdk/accel.h 00:04:12.766 TEST_HEADER include/spdk/accel_module.h 00:04:12.766 TEST_HEADER include/spdk/assert.h 00:04:12.766 TEST_HEADER include/spdk/barrier.h 00:04:12.766 TEST_HEADER include/spdk/base64.h 00:04:12.766 TEST_HEADER include/spdk/bdev.h 00:04:12.766 TEST_HEADER include/spdk/bdev_module.h 00:04:12.766 TEST_HEADER include/spdk/bdev_zone.h 00:04:12.766 TEST_HEADER include/spdk/bit_array.h 00:04:12.766 TEST_HEADER include/spdk/bit_pool.h 00:04:12.766 TEST_HEADER include/spdk/blob_bdev.h 00:04:12.766 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:12.766 TEST_HEADER include/spdk/blobfs.h 00:04:12.766 TEST_HEADER include/spdk/blob.h 00:04:12.766 TEST_HEADER include/spdk/conf.h 00:04:12.766 TEST_HEADER 
include/spdk/config.h 00:04:12.766 TEST_HEADER include/spdk/cpuset.h 00:04:12.766 TEST_HEADER include/spdk/crc16.h 00:04:12.767 TEST_HEADER include/spdk/crc32.h 00:04:12.767 TEST_HEADER include/spdk/crc64.h 00:04:12.767 TEST_HEADER include/spdk/dif.h 00:04:12.767 TEST_HEADER include/spdk/dma.h 00:04:12.767 TEST_HEADER include/spdk/endian.h 00:04:12.767 TEST_HEADER include/spdk/env_dpdk.h 00:04:12.767 TEST_HEADER include/spdk/env.h 00:04:12.767 TEST_HEADER include/spdk/event.h 00:04:12.767 TEST_HEADER include/spdk/fd_group.h 00:04:12.767 TEST_HEADER include/spdk/fd.h 00:04:12.767 TEST_HEADER include/spdk/file.h 00:04:12.767 TEST_HEADER include/spdk/ftl.h 00:04:12.767 TEST_HEADER include/spdk/gpt_spec.h 00:04:12.767 TEST_HEADER include/spdk/hexlify.h 00:04:12.767 TEST_HEADER include/spdk/histogram_data.h 00:04:12.767 TEST_HEADER include/spdk/idxd.h 00:04:12.767 TEST_HEADER include/spdk/idxd_spec.h 00:04:12.767 TEST_HEADER include/spdk/init.h 00:04:12.767 TEST_HEADER include/spdk/ioat.h 00:04:12.767 TEST_HEADER include/spdk/ioat_spec.h 00:04:12.767 TEST_HEADER include/spdk/iscsi_spec.h 00:04:12.767 TEST_HEADER include/spdk/json.h 00:04:12.767 TEST_HEADER include/spdk/jsonrpc.h 00:04:12.767 TEST_HEADER include/spdk/keyring.h 00:04:12.767 TEST_HEADER include/spdk/keyring_module.h 00:04:12.767 TEST_HEADER include/spdk/likely.h 00:04:12.767 TEST_HEADER include/spdk/log.h 00:04:12.767 TEST_HEADER include/spdk/lvol.h 00:04:12.767 TEST_HEADER include/spdk/memory.h 00:04:12.767 TEST_HEADER include/spdk/nbd.h 00:04:12.767 TEST_HEADER include/spdk/mmio.h 00:04:12.767 TEST_HEADER include/spdk/net.h 00:04:12.767 TEST_HEADER include/spdk/notify.h 00:04:12.767 TEST_HEADER include/spdk/nvme.h 00:04:12.767 TEST_HEADER include/spdk/nvme_intel.h 00:04:12.767 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:12.767 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:12.767 TEST_HEADER include/spdk/nvme_spec.h 00:04:12.767 TEST_HEADER include/spdk/nvme_zns.h 00:04:12.767 TEST_HEADER 
include/spdk/nvmf_cmd.h 00:04:12.767 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:12.767 TEST_HEADER include/spdk/nvmf.h 00:04:12.767 TEST_HEADER include/spdk/nvmf_spec.h 00:04:12.767 TEST_HEADER include/spdk/nvmf_transport.h 00:04:12.767 TEST_HEADER include/spdk/opal.h 00:04:12.767 TEST_HEADER include/spdk/pci_ids.h 00:04:12.767 TEST_HEADER include/spdk/opal_spec.h 00:04:12.767 TEST_HEADER include/spdk/pipe.h 00:04:12.767 TEST_HEADER include/spdk/queue.h 00:04:12.767 TEST_HEADER include/spdk/reduce.h 00:04:12.767 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:12.767 TEST_HEADER include/spdk/rpc.h 00:04:12.767 TEST_HEADER include/spdk/scheduler.h 00:04:12.767 TEST_HEADER include/spdk/scsi.h 00:04:12.767 TEST_HEADER include/spdk/scsi_spec.h 00:04:12.767 TEST_HEADER include/spdk/sock.h 00:04:12.767 TEST_HEADER include/spdk/stdinc.h 00:04:12.767 TEST_HEADER include/spdk/string.h 00:04:12.767 TEST_HEADER include/spdk/thread.h 00:04:12.767 TEST_HEADER include/spdk/trace.h 00:04:12.767 TEST_HEADER include/spdk/trace_parser.h 00:04:12.767 TEST_HEADER include/spdk/tree.h 00:04:12.767 CC app/spdk_dd/spdk_dd.o 00:04:12.767 TEST_HEADER include/spdk/ublk.h 00:04:12.767 TEST_HEADER include/spdk/util.h 00:04:12.767 TEST_HEADER include/spdk/uuid.h 00:04:12.767 TEST_HEADER include/spdk/version.h 00:04:12.767 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:12.767 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:12.767 TEST_HEADER include/spdk/vhost.h 00:04:12.767 TEST_HEADER include/spdk/vmd.h 00:04:12.767 TEST_HEADER include/spdk/zipf.h 00:04:12.767 TEST_HEADER include/spdk/xor.h 00:04:12.767 CXX test/cpp_headers/accel.o 00:04:12.767 CXX test/cpp_headers/accel_module.o 00:04:12.767 CXX test/cpp_headers/assert.o 00:04:12.767 CXX test/cpp_headers/barrier.o 00:04:12.767 CXX test/cpp_headers/base64.o 00:04:12.767 CXX test/cpp_headers/bdev.o 00:04:12.767 CC app/iscsi_tgt/iscsi_tgt.o 00:04:12.767 CXX test/cpp_headers/bdev_module.o 00:04:12.767 CXX test/cpp_headers/bdev_zone.o 
00:04:12.767 CXX test/cpp_headers/bit_array.o 00:04:12.767 CXX test/cpp_headers/bit_pool.o 00:04:12.767 CXX test/cpp_headers/blob_bdev.o 00:04:12.767 CXX test/cpp_headers/blobfs_bdev.o 00:04:12.767 CXX test/cpp_headers/blobfs.o 00:04:12.767 CXX test/cpp_headers/blob.o 00:04:12.767 CXX test/cpp_headers/conf.o 00:04:12.767 CXX test/cpp_headers/config.o 00:04:12.767 CXX test/cpp_headers/cpuset.o 00:04:12.767 CXX test/cpp_headers/crc16.o 00:04:12.767 CC app/nvmf_tgt/nvmf_main.o 00:04:12.767 CXX test/cpp_headers/crc32.o 00:04:12.767 CC examples/ioat/verify/verify.o 00:04:12.767 CC examples/util/zipf/zipf.o 00:04:12.767 CC app/spdk_tgt/spdk_tgt.o 00:04:12.767 CC examples/ioat/perf/perf.o 00:04:12.767 CC app/fio/nvme/fio_plugin.o 00:04:12.767 CC test/env/memory/memory_ut.o 00:04:13.026 CC test/env/vtophys/vtophys.o 00:04:13.026 CC test/thread/poller_perf/poller_perf.o 00:04:13.026 CC test/app/jsoncat/jsoncat.o 00:04:13.026 CC test/env/pci/pci_ut.o 00:04:13.026 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:13.026 CC test/app/histogram_perf/histogram_perf.o 00:04:13.026 CC test/app/stub/stub.o 00:04:13.026 CC test/dma/test_dma/test_dma.o 00:04:13.026 CC test/app/bdev_svc/bdev_svc.o 00:04:13.026 CC app/fio/bdev/fio_plugin.o 00:04:13.026 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.026 LINK spdk_lspci 00:04:13.026 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:13.289 LINK rpc_client_test 00:04:13.289 LINK spdk_nvme_discover 00:04:13.289 LINK jsoncat 00:04:13.289 LINK interrupt_tgt 00:04:13.289 LINK zipf 00:04:13.289 CXX test/cpp_headers/crc64.o 00:04:13.289 LINK vtophys 00:04:13.289 LINK poller_perf 00:04:13.289 LINK histogram_perf 00:04:13.289 LINK spdk_trace_record 00:04:13.289 CXX test/cpp_headers/dif.o 00:04:13.289 CXX test/cpp_headers/dma.o 00:04:13.289 CXX test/cpp_headers/endian.o 00:04:13.289 CXX test/cpp_headers/env_dpdk.o 00:04:13.289 LINK nvmf_tgt 00:04:13.289 LINK env_dpdk_post_init 00:04:13.289 CXX test/cpp_headers/env.o 00:04:13.289 CXX 
test/cpp_headers/event.o 00:04:13.289 CXX test/cpp_headers/fd_group.o 00:04:13.289 CXX test/cpp_headers/fd.o 00:04:13.289 CXX test/cpp_headers/file.o 00:04:13.289 CXX test/cpp_headers/ftl.o 00:04:13.289 CXX test/cpp_headers/gpt_spec.o 00:04:13.289 CXX test/cpp_headers/hexlify.o 00:04:13.289 CXX test/cpp_headers/histogram_data.o 00:04:13.289 LINK iscsi_tgt 00:04:13.289 LINK stub 00:04:13.289 CXX test/cpp_headers/idxd.o 00:04:13.289 CXX test/cpp_headers/idxd_spec.o 00:04:13.289 LINK ioat_perf 00:04:13.289 LINK verify 00:04:13.289 LINK spdk_tgt 00:04:13.289 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:13.289 LINK bdev_svc 00:04:13.551 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:13.551 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:13.551 CXX test/cpp_headers/init.o 00:04:13.551 CXX test/cpp_headers/ioat.o 00:04:13.551 CXX test/cpp_headers/ioat_spec.o 00:04:13.551 CXX test/cpp_headers/iscsi_spec.o 00:04:13.551 CXX test/cpp_headers/json.o 00:04:13.551 LINK spdk_trace 00:04:13.551 CXX test/cpp_headers/jsonrpc.o 00:04:13.551 LINK spdk_dd 00:04:13.551 CXX test/cpp_headers/keyring.o 00:04:13.551 CXX test/cpp_headers/keyring_module.o 00:04:13.551 CXX test/cpp_headers/likely.o 00:04:13.551 CXX test/cpp_headers/log.o 00:04:13.551 CXX test/cpp_headers/lvol.o 00:04:13.551 CXX test/cpp_headers/memory.o 00:04:13.551 CXX test/cpp_headers/mmio.o 00:04:13.551 CXX test/cpp_headers/nbd.o 00:04:13.815 CXX test/cpp_headers/net.o 00:04:13.815 LINK pci_ut 00:04:13.815 CXX test/cpp_headers/notify.o 00:04:13.815 CXX test/cpp_headers/nvme.o 00:04:13.815 CXX test/cpp_headers/nvme_intel.o 00:04:13.815 CXX test/cpp_headers/nvme_ocssd.o 00:04:13.815 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:13.815 CXX test/cpp_headers/nvme_spec.o 00:04:13.815 CXX test/cpp_headers/nvme_zns.o 00:04:13.815 CXX test/cpp_headers/nvmf_cmd.o 00:04:13.815 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:13.815 CXX test/cpp_headers/nvmf.o 00:04:13.815 CXX test/cpp_headers/nvmf_spec.o 00:04:13.815 LINK test_dma 
00:04:13.815 CXX test/cpp_headers/nvmf_transport.o 00:04:13.815 CXX test/cpp_headers/opal.o 00:04:13.815 CC examples/sock/hello_world/hello_sock.o 00:04:13.815 LINK nvme_fuzz 00:04:13.815 CC test/event/event_perf/event_perf.o 00:04:13.815 CXX test/cpp_headers/opal_spec.o 00:04:13.815 CXX test/cpp_headers/pci_ids.o 00:04:14.076 CC examples/vmd/lsvmd/lsvmd.o 00:04:14.076 CC examples/vmd/led/led.o 00:04:14.076 CC examples/thread/thread/thread_ex.o 00:04:14.077 CC examples/idxd/perf/perf.o 00:04:14.077 LINK spdk_nvme 00:04:14.077 CXX test/cpp_headers/pipe.o 00:04:14.077 CC test/event/reactor/reactor.o 00:04:14.077 CXX test/cpp_headers/queue.o 00:04:14.077 CC test/event/reactor_perf/reactor_perf.o 00:04:14.077 CXX test/cpp_headers/reduce.o 00:04:14.077 LINK spdk_bdev 00:04:14.077 CXX test/cpp_headers/rpc.o 00:04:14.077 CXX test/cpp_headers/scheduler.o 00:04:14.077 CXX test/cpp_headers/scsi.o 00:04:14.077 CXX test/cpp_headers/scsi_spec.o 00:04:14.077 CXX test/cpp_headers/sock.o 00:04:14.077 CXX test/cpp_headers/stdinc.o 00:04:14.077 CXX test/cpp_headers/string.o 00:04:14.077 CC test/event/app_repeat/app_repeat.o 00:04:14.077 CXX test/cpp_headers/thread.o 00:04:14.077 CXX test/cpp_headers/trace.o 00:04:14.077 CXX test/cpp_headers/trace_parser.o 00:04:14.077 CC test/event/scheduler/scheduler.o 00:04:14.077 CXX test/cpp_headers/tree.o 00:04:14.077 CXX test/cpp_headers/ublk.o 00:04:14.077 CXX test/cpp_headers/util.o 00:04:14.077 CXX test/cpp_headers/uuid.o 00:04:14.077 CXX test/cpp_headers/version.o 00:04:14.077 CC app/vhost/vhost.o 00:04:14.077 CXX test/cpp_headers/vfio_user_pci.o 00:04:14.340 CXX test/cpp_headers/vfio_user_spec.o 00:04:14.340 CXX test/cpp_headers/vhost.o 00:04:14.340 CXX test/cpp_headers/vmd.o 00:04:14.340 CXX test/cpp_headers/xor.o 00:04:14.340 CXX test/cpp_headers/zipf.o 00:04:14.340 LINK lsvmd 00:04:14.340 LINK event_perf 00:04:14.340 LINK vhost_fuzz 00:04:14.340 LINK mem_callbacks 00:04:14.340 LINK led 00:04:14.340 LINK spdk_nvme_perf 00:04:14.340 LINK 
reactor 00:04:14.340 LINK spdk_nvme_identify 00:04:14.340 LINK reactor_perf 00:04:14.340 LINK spdk_top 00:04:14.340 LINK hello_sock 00:04:14.340 LINK app_repeat 00:04:14.600 LINK thread 00:04:14.600 CC test/nvme/reset/reset.o 00:04:14.600 CC test/nvme/aer/aer.o 00:04:14.600 CC test/nvme/sgl/sgl.o 00:04:14.600 CC test/nvme/e2edp/nvme_dp.o 00:04:14.600 CC test/nvme/overhead/overhead.o 00:04:14.600 CC test/nvme/startup/startup.o 00:04:14.600 CC test/nvme/err_injection/err_injection.o 00:04:14.600 CC test/accel/dif/dif.o 00:04:14.600 CC test/nvme/reserve/reserve.o 00:04:14.600 CC test/blobfs/mkfs/mkfs.o 00:04:14.600 LINK vhost 00:04:14.600 CC test/nvme/connect_stress/connect_stress.o 00:04:14.600 CC test/nvme/simple_copy/simple_copy.o 00:04:14.600 CC test/nvme/compliance/nvme_compliance.o 00:04:14.600 CC test/nvme/boot_partition/boot_partition.o 00:04:14.600 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:14.600 CC test/nvme/fused_ordering/fused_ordering.o 00:04:14.600 CC test/nvme/fdp/fdp.o 00:04:14.600 LINK scheduler 00:04:14.600 CC test/lvol/esnap/esnap.o 00:04:14.600 CC test/nvme/cuse/cuse.o 00:04:14.600 LINK idxd_perf 00:04:14.859 LINK startup 00:04:14.859 LINK boot_partition 00:04:14.859 LINK connect_stress 00:04:14.859 LINK mkfs 00:04:14.859 LINK err_injection 00:04:14.859 LINK simple_copy 00:04:14.859 CC examples/nvme/reconnect/reconnect.o 00:04:14.859 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:14.859 CC examples/nvme/hotplug/hotplug.o 00:04:14.859 CC examples/nvme/hello_world/hello_world.o 00:04:14.859 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:14.859 CC examples/nvme/abort/abort.o 00:04:14.859 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:14.859 CC examples/nvme/arbitration/arbitration.o 00:04:14.859 LINK overhead 00:04:14.859 LINK aer 00:04:14.859 LINK reserve 00:04:14.859 LINK memory_ut 00:04:14.859 LINK doorbell_aers 00:04:14.859 LINK sgl 00:04:14.859 CC examples/accel/perf/accel_perf.o 00:04:15.117 LINK fused_ordering 00:04:15.117 LINK 
nvme_dp
00:04:15.117 LINK reset
00:04:15.117 LINK fdp
00:04:15.117 CC examples/blob/hello_world/hello_blob.o
00:04:15.117 CC examples/blob/cli/blobcli.o
00:04:15.117 LINK nvme_compliance
00:04:15.117 LINK pmr_persistence
00:04:15.117 LINK dif
00:04:15.117 LINK cmb_copy
00:04:15.117 LINK hotplug
00:04:15.375 LINK hello_world
00:04:15.375 LINK arbitration
00:04:15.375 LINK abort
00:04:15.375 LINK hello_blob
00:04:15.375 LINK reconnect
00:04:15.375 LINK nvme_manage
00:04:15.634 LINK accel_perf
00:04:15.634 LINK blobcli
00:04:15.634 CC test/bdev/bdevio/bdevio.o
00:04:15.634 LINK iscsi_fuzz
00:04:15.905 CC examples/bdev/hello_world/hello_bdev.o
00:04:15.905 CC examples/bdev/bdevperf/bdevperf.o
00:04:15.905 LINK bdevio
00:04:16.163 LINK hello_bdev
00:04:16.163 LINK cuse
00:04:16.728 LINK bdevperf
00:04:16.986 CC examples/nvmf/nvmf/nvmf.o
00:04:17.244 LINK nvmf
00:04:19.773 LINK esnap
00:04:20.032
00:04:20.032 real 0m41.288s
00:04:20.032 user 7m25.602s
00:04:20.032 sys 1m47.869s
00:04:20.032 17:52:27 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:04:20.032 17:52:27 make -- common/autotest_common.sh@10 -- $ set +x
00:04:20.032 ************************************
00:04:20.032 END TEST make
00:04:20.032 ************************************
00:04:20.032 17:52:27 -- common/autotest_common.sh@1142 -- $ return 0
00:04:20.032 17:52:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:20.032 17:52:27 -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:20.032 17:52:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:20.032 17:52:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.032 17:52:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:04:20.032 17:52:27 -- pm/common@44 -- $ pid=2108634
00:04:20.032 17:52:27 -- pm/common@50 -- $ kill -TERM 2108634
00:04:20.032 17:52:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.032 17:52:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:04:20.032 17:52:27 -- pm/common@44 -- $ pid=2108636
00:04:20.032 17:52:27 -- pm/common@50 -- $ kill -TERM 2108636
00:04:20.032 17:52:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.032 17:52:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:04:20.032 17:52:27 -- pm/common@44 -- $ pid=2108638
00:04:20.032 17:52:27 -- pm/common@50 -- $ kill -TERM 2108638
00:04:20.032 17:52:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.032 17:52:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:04:20.032 17:52:27 -- pm/common@44 -- $ pid=2108665
00:04:20.032 17:52:27 -- pm/common@50 -- $ sudo -E kill -TERM 2108665
00:04:20.032 17:52:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:20.032 17:52:27 -- nvmf/common.sh@7 -- # uname -s
00:04:20.032 17:52:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:20.032 17:52:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:20.032 17:52:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:20.032 17:52:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:20.032 17:52:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:20.032 17:52:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:20.032 17:52:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:20.032 17:52:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:20.032 17:52:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:20.032 17:52:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:20.032 17:52:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:04:20.032 17:52:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:04:20.032 17:52:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:20.032 17:52:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:20.033 17:52:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:04:20.033 17:52:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:20.033 17:52:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:20.033 17:52:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:20.033 17:52:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:20.033 17:52:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:20.033 17:52:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:20.033 17:52:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:20.033 17:52:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:20.033 17:52:27 -- paths/export.sh@5 -- # export PATH
00:04:20.033 17:52:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:20.033 17:52:27 -- nvmf/common.sh@47 -- # : 0
00:04:20.033 17:52:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:04:20.033 17:52:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:04:20.033 17:52:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:20.033 17:52:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:20.033 17:52:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:20.033 17:52:27 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:04:20.033 17:52:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:04:20.033 17:52:27 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:04:20.033 17:52:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:20.033 17:52:27 -- spdk/autotest.sh@32 -- # uname -s
00:04:20.033 17:52:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:20.033 17:52:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:04:20.033 17:52:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:04:20.033 17:52:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:04:20.033 17:52:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:04:20.033 17:52:27 -- spdk/autotest.sh@44 -- # modprobe nbd
00:04:20.033 17:52:27 -- spdk/autotest.sh@46 -- # type -P udevadm
00:04:20.033 17:52:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:04:20.033 17:52:27 -- spdk/autotest.sh@48 -- # udevadm_pid=2185115
00:04:20.033 17:52:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:04:20.033 17:52:27 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:04:20.033 17:52:27 -- pm/common@17 -- # local monitor
00:04:20.033 17:52:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.033 17:52:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.033 17:52:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.033 17:52:27 -- pm/common@21 -- # date +%s
00:04:20.033 17:52:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.033 17:52:27 -- pm/common@21 -- # date +%s
00:04:20.033 17:52:27 -- pm/common@25 -- # sleep 1
00:04:20.033 17:52:27 -- pm/common@21 -- # date +%s
00:04:20.033 17:52:27 -- pm/common@21 -- # date +%s
00:04:20.033 17:52:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721749947
00:04:20.033 17:52:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721749947
00:04:20.033 17:52:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721749947
00:04:20.033 17:52:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721749947
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721749947_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721749947_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721749947_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721749947_collect-bmc-pm.bmc.pm.log
00:04:20.968 17:52:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:20.968 17:52:28 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:20.968 17:52:28 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:20.968 17:52:28 -- common/autotest_common.sh@10 -- # set +x
00:04:20.968 17:52:28 -- spdk/autotest.sh@59 -- # create_test_list
00:04:20.968 17:52:28 -- common/autotest_common.sh@746 -- # xtrace_disable
00:04:20.968 17:52:28 -- common/autotest_common.sh@10 -- # set +x
00:04:21.226 17:52:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:04:21.226 17:52:28 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:21.226 17:52:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:21.226 17:52:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:04:21.226 17:52:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:21.226 17:52:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:21.226 17:52:28 -- common/autotest_common.sh@1455 -- # uname
00:04:21.226 17:52:28 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:04:21.226 17:52:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:21.226 17:52:28 -- common/autotest_common.sh@1475 -- # uname
00:04:21.226 17:52:28 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:04:21.226 17:52:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:04:21.226 17:52:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:04:21.226 17:52:28 -- spdk/autotest.sh@72 -- # hash lcov
00:04:21.226 17:52:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:04:21.226 17:52:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:04:21.226 --rc lcov_branch_coverage=1
00:04:21.226 --rc lcov_function_coverage=1
00:04:21.226 --rc genhtml_branch_coverage=1
00:04:21.226 --rc genhtml_function_coverage=1
00:04:21.226 --rc genhtml_legend=1
00:04:21.226 --rc geninfo_all_blocks=1
00:04:21.226 '
00:04:21.226 17:52:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:04:21.226 --rc lcov_branch_coverage=1
00:04:21.226 --rc lcov_function_coverage=1
00:04:21.226 --rc genhtml_branch_coverage=1
00:04:21.226 --rc genhtml_function_coverage=1
00:04:21.226 --rc genhtml_legend=1
00:04:21.226 --rc geninfo_all_blocks=1
00:04:21.226 '
00:04:21.226 17:52:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:04:21.226 --rc lcov_branch_coverage=1
00:04:21.226 --rc lcov_function_coverage=1
00:04:21.226 --rc genhtml_branch_coverage=1
00:04:21.226 --rc genhtml_function_coverage=1
00:04:21.226 --rc genhtml_legend=1
00:04:21.226 --rc geninfo_all_blocks=1
00:04:21.226 --no-external'
00:04:21.226 17:52:28 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:04:21.226 --rc lcov_branch_coverage=1
00:04:21.226 --rc lcov_function_coverage=1
00:04:21.226 --rc genhtml_branch_coverage=1
00:04:21.226 --rc genhtml_function_coverage=1
00:04:21.226 --rc genhtml_legend=1
00:04:21.226 --rc geninfo_all_blocks=1
00:04:21.226 --no-external'
00:04:21.226 17:52:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:04:21.226 lcov: LCOV version 1.14
00:04:21.226 17:52:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:04:39.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:04:39.380 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:04:51.603
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:04:51.603 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:04:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:04:51.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:04:51.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:04:51.605 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:04:54.141 17:53:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:54.141 17:53:01 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:54.141 17:53:01 -- common/autotest_common.sh@10 -- # set +x
00:04:54.141 17:53:01 -- spdk/autotest.sh@91 -- # rm -f
00:04:54.141 17:53:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:55.518 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:04:55.518 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:55.518 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:55.518 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:55.518 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:55.518 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:55.518 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:55.518 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:55.518 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:55.518 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:55.518 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:55.518 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:55.518 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:55.518 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:55.518 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:55.518 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:55.518 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:55.778 17:53:03 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:55.778 17:53:03 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:55.778 17:53:03 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:55.778 17:53:03 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:55.778 17:53:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:55.778 17:53:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:55.778 17:53:03 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:55.778 17:53:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:55.778 17:53:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:55.778 17:53:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:04:55.778 17:53:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:04:55.778 17:53:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:04:55.778 17:53:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:04:55.778 17:53:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:04:55.778 17:53:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:55.778 No valid GPT data, bailing
00:04:55.778 17:53:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:55.778 17:53:03 -- scripts/common.sh@391 -- # pt=
00:04:55.778 17:53:03 -- scripts/common.sh@392 -- # return 1
00:04:55.778 17:53:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:55.778 1+0 records in
00:04:55.778 1+0 records out
00:04:55.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00183272 s, 572 MB/s
00:04:55.778 17:53:03 -- spdk/autotest.sh@118 -- # sync
00:04:55.778 17:53:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:55.778 17:53:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:55.778 17:53:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:57.685 17:53:05 -- spdk/autotest.sh@124 -- # uname -s
00:04:57.685 17:53:05 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:57.685 17:53:05 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:57.685 17:53:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:57.685 17:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:57.685 17:53:05 -- common/autotest_common.sh@10 -- # set +x
00:04:57.685 ************************************
00:04:57.685 START TEST setup.sh
00:04:57.685 ************************************
00:04:57.685 17:53:05 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:57.685 * Looking for test storage...
00:04:57.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:57.685 17:53:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:04:57.685 17:53:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:57.685 17:53:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:57.685 17:53:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:57.685 17:53:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:57.685 17:53:05 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:57.685 ************************************
00:04:57.685 START TEST acl
00:04:57.685 ************************************
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:57.685 * Looking for test storage...
00:04:57.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:57.685 17:53:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:57.685 17:53:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:57.685 17:53:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:04:57.685 17:53:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:04:57.685 17:53:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:04:57.685 17:53:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:04:57.685 17:53:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:04:57.685 17:53:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:57.685 17:53:05 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:59.595 17:53:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:04:59.595 17:53:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:04:59.595 17:53:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.595 17:53:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:04:59.595 17:53:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.595 17:53:06 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:00.535 Hugepages
00:05:00.535 node hugesize free / total
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.535
00:05:00.535 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.535 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:05:00.536 17:53:08 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:05:00.536 17:53:08 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:00.536 17:53:08 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:00.536 17:53:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:00.795 ************************************
00:05:00.795 START TEST denied
00:05:00.795 ************************************
00:05:00.795 17:53:08 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:05:00.795 17:53:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:05:00.795 17:53:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:05:00.795 17:53:08 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:05:00.795 17:53:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:05:00.795 17:53:08 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:02.174 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:02.174 17:53:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:04.711
00:05:04.711 real 0m4.017s
00:05:04.711 user 0m1.163s
00:05:04.711 sys 0m1.898s
00:05:04.711 17:53:12 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:04.711 17:53:12 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:05:04.711 ************************************
00:05:04.711 END TEST denied
00:05:04.711 ************************************
00:05:04.712 17:53:12 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:05:04.712 17:53:12 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:05:04.712 17:53:12 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.712 17:53:12 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.712 17:53:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:04.712 ************************************
00:05:04.712 START TEST allowed
00:05:04.712 ************************************
00:05:04.712 17:53:12 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:05:04.712 17:53:12 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:05:04.712 17:53:12 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:05:04.712 17:53:12 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:05:04.712 17:53:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.712 17:53:12 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:07.246 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:05:07.246 17:53:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:05:07.246 17:53:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:05:07.246 17:53:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:05:07.246 17:53:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:07.246 17:53:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:09.210
00:05:09.210 real 0m4.100s
00:05:09.210 user 0m1.100s
00:05:09.210 sys 0m1.840s
00:05:09.210 17:53:16 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:09.210 17:53:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:05:09.210 ************************************
00:05:09.210 END TEST allowed
00:05:09.210 ************************************
00:05:09.211 17:53:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:05:09.211
00:05:09.211 real 0m11.121s
00:05:09.211 user 0m3.506s
00:05:09.211 sys 0m5.576s
00:05:09.211 17:53:16 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:09.211 17:53:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:09.211 ************************************
00:05:09.211 END TEST acl
00:05:09.211 ************************************
00:05:09.211 17:53:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:05:09.211 17:53:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:05:09.211 17:53:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:09.211 17:53:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:09.211 17:53:16 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:09.211 ************************************
00:05:09.211 START TEST hugepages
00:05:09.211 ************************************
00:05:09.211 17:53:16 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:05:09.211 * Looking for test storage...
00:05:09.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 41711752 kB' 'MemAvailable: 45196960 kB' 'Buffers: 2704 kB' 'Cached: 12322436 kB' 'SwapCached: 0 kB' 'Active: 9279352 kB' 'Inactive: 3493372 kB' 'Active(anon): 8890864 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451300 kB' 'Mapped: 186092 kB' 'Shmem: 8443280 kB' 'KReclaimable: 194372 kB' 'Slab: 555348 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360976 kB' 'KernelStack: 12624 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 9986636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196340 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB'
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.211 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:09.212 17:53:16 setup.sh.hugepages
-- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:09.212 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:09.213 
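The `get_nodes`/`clear_hp` steps traced above enumerate NUMA nodes with the extglob pattern `node+([0-9])` and write `0` into every per-size `nr_hugepages` file. A minimal standalone sketch of that pattern follows; the `SYSFS_ROOT`-style first argument is a hypothetical addition (the real script writes directly under `/sys`, which needs root) so the walk can be exercised against a scratch tree:

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp pattern from setup/hugepages.sh@37-41 above:
# walk every NUMA node's per-size hugepage directories and reset each
# nr_hugepages count to 0. Taking a root-dir argument is an assumption
# for demonstration; the real script operates on /sys itself.
shopt -s extglob nullglob

clear_hp() {
    local sysfs_root=$1
    local node hp
    for node in "$sysfs_root"/sys/devices/system/node/node+([0-9]); do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}

# Build a scratch tree mimicking the 2-node system in the log, with
# 1024 reserved 2 MiB pages per node, then clear it.
root=$(mktemp -d)
for n in 0 1; do
    mkdir -p "$root/sys/devices/system/node/node$n/hugepages/hugepages-2048kB"
    echo 1024 > "$root/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages"
done
clear_hp "$root"
cat "$root/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages"
```

The `nullglob` option makes the loops a no-op on nodes with no hugepage directories instead of iterating over the unexpanded pattern.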
17:53:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:09.213 17:53:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:09.213 17:53:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.213 17:53:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.213 17:53:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.213 ************************************ 00:05:09.213 START TEST default_setup 00:05:09.213 ************************************ 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.213 17:53:16 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.213 17:53:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.150 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.150 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.150 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.150 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.410 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.410 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.410 0000:00:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:05:10.410 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:10.410 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.410 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.410 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.410 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.410 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.410 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.410 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.410 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:11.350 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43808952 kB' 'MemAvailable: 47294136 kB' 'Buffers: 2704 kB' 'Cached: 12322520 kB' 'SwapCached: 0 kB' 'Active: 9296948 kB' 'Inactive: 3493372 kB' 'Active(anon): 8908460 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468356 kB' 'Mapped: 186256 kB' 'Shmem: 8443364 kB' 'KReclaimable: 194324 kB' 'Slab: 554764 kB' 'SReclaimable: 194324 kB' 'SUnreclaim: 360440 kB' 'KernelStack: 12656 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10003808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196356 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 
14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 
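The long `[[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue` chains above are xtrace output of `get_meminfo`: each meminfo line is split on `': '` into a field name and a value, non-matching fields fall through to `continue`, and the matching field's value is echoed. The backslash escaping in the trace is just how `set -x` renders a quoted right-hand side, which forces a literal compare rather than a glob match. A condensed sketch of the same parse, fed a hypothetical three-line sample instead of the real `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern from setup/common.sh@17-33 above:
# split each line on ': ' and print the value for the requested field.
# The real script reads /proc/meminfo or a per-node
# /sys/devices/system/node/nodeN/meminfo; the sample here is made up.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Quoting the right-hand side of == in [[ ]] makes this a
        # literal string compare (the trace shows it backslash-escaped).
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

sample='MemTotal: 60541720 kB
HugePages_Total: 1024
Hugepagesize: 2048 kB'

get_meminfo Hugepagesize <<< "$sample"   # prints 2048
```

The trailing `_` in `read -r var val _` soaks up the `kB` unit column, which is why the script can `echo 2048` for `Hugepagesize` and compare values numerically.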
17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.350 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 
17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:11.351 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:11.613 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.613 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.613 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:11.613 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:11.613 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.614 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43811648 kB' 'MemAvailable: 47296832 kB' 'Buffers: 2704 kB' 'Cached: 12322524 kB' 'SwapCached: 0 kB' 'Active: 9297092 kB' 'Inactive: 3493372 kB' 'Active(anon): 8908604 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468600 kB' 'Mapped: 186216 kB' 'Shmem: 8443368 kB' 'KReclaimable: 194324 kB' 'Slab: 554760 kB' 'SReclaimable: 194324 kB' 'SUnreclaim: 360436 kB' 'KernelStack: 12640 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10003456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196292 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.614 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 
17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:11.615 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43812060 kB' 'MemAvailable: 47297244 kB' 'Buffers: 2704 kB' 'Cached: 12322540 kB' 'SwapCached: 0 kB' 'Active: 9297296 kB' 'Inactive: 3493372 kB' 'Active(anon): 8908808 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468720 kB' 'Mapped: 185968 kB' 'Shmem: 8443384 kB' 'KReclaimable: 194324 kB' 'Slab: 554776 kB' 'SReclaimable: 194324 kB' 'SUnreclaim: 360452 kB' 'KernelStack: 12688 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10003476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196276 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.616 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.616 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [repetitive /proc/meminfo scan elided: each key from Active through HugePages_Free compared against HugePages_Rsvd and skipped via continue] 00:05:11.616-617 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.617 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:11.617 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:11.617 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:11.617 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:11.617 nr_hugepages=1024 00:05:11.617 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.617 resv_hugepages=0 00:05:11.617 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.617 surplus_hugepages=0 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.618 anon_hugepages=0 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:11.618 17:53:19
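The trace here shows setup/common.sh reading /proc/meminfo line by line with `IFS=': '` and `read -r var val _`, comparing each key to the requested one and echoing its value (or 0 if absent). A minimal self-contained sketch of that parsing pattern follows; the `get_meminfo` function below is a simplified stand-in for illustration, not the actual setup/common.sh implementation (it omits the per-node and mapfile handling):

```shell
# Simplified stand-in for setup/common.sh's get_meminfo: reads
# "Key: value unit" lines from stdin, prints the value of the
# requested key, or 0 when the key is not present.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # IFS=': ' splits "HugePages_Total: 1024" into var/val;
        # stop at the first matching key.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

printf '%s\n' 'MemTotal: 60541720 kB' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' |
    get_meminfo HugePages_Total
# → 1024
```

In the real script the per-key `continue` lines seen in the trace are this loop's non-matching iterations rendered by bash xtrace.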
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43812060 kB' 'MemAvailable: 47297244 kB' 'Buffers: 2704 kB' 'Cached: 12322564 kB' 'SwapCached: 0 kB' 'Active: 9296856 kB' 'Inactive: 3493372 kB' 'Active(anon): 8908368 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468192 kB' 'Mapped: 186140 kB' 'Shmem: 8443408 kB' 'KReclaimable: 194324 kB' 'Slab: 554776 kB' 'SReclaimable: 194324 kB' 'SUnreclaim: 360452 kB' 'KernelStack: 12624 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10003504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196276 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.618 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [repetitive /proc/meminfo scan elided: each key from Cached through Unaccepted compared against HugePages_Total and skipped via continue; trace truncated mid-scan] 00:05:11.618-620 17:53:19
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.620 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20039680 kB' 'MemUsed: 12837260 kB' 'SwapCached: 0 kB' 'Active: 6338544 kB' 'Inactive: 3258956 kB' 'Active(anon): 6212728 kB' 'Inactive(anon): 0 kB' 'Active(file): 125816 kB' 'Inactive(file): 3258956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9381520 kB' 'Mapped: 53788 kB' 'AnonPages: 219144 kB' 'Shmem: 5996748 kB' 'KernelStack: 7768 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103688 kB' 'Slab: 340384 kB' 'SReclaimable: 103688 kB' 'SUnreclaim: 236696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
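The `get_meminfo HugePages_Surp 0` call traced here takes the per-node branch of the helper (`setup/common.sh@23-29` in the trace): when a node is given, it reads `/sys/devices/system/node/node0/meminfo` instead of `/proc/meminfo`, and each line there carries a `Node 0 ` prefix that is stripped with an extglob parameter expansion before the same scan loop runs. A small sketch of just that prefix-stripping step, using fabricated sample lines:

```shell
#!/usr/bin/env bash
# The per-node meminfo lines look like "Node 0 MemTotal: ... kB"; the
# trace strips the prefix with mem=("${mem[@]#Node +([0-9]) }").
shopt -s extglob   # required for the +([0-9]) pattern in the expansion
mem=('Node 0 MemTotal: 32876940 kB' 'Node 0 HugePages_Total: 1024')
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <N> " prefix per element
printf '%s\n' "${mem[@]}"
# MemTotal: 32876940 kB
# HugePages_Total: 1024
```

After the strip, the per-node lines have the same `Key: value` shape as `/proc/meminfo`, so the identical `IFS=': ' read -r var val _` loop can parse both sources.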
00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 
17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.620 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:11.621 node0=1024 expecting 1024 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == 
\1\0\2\4 ]] 00:05:11.621 00:05:11.621 real 0m2.544s 00:05:11.621 user 0m0.700s 00:05:11.621 sys 0m0.922s 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.621 17:53:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:11.621 ************************************ 00:05:11.621 END TEST default_setup 00:05:11.621 ************************************ 00:05:11.621 17:53:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:11.621 17:53:19 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:11.621 17:53:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.621 17:53:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.621 17:53:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:11.621 ************************************ 00:05:11.621 START TEST per_node_1G_alloc 00:05:11.621 ************************************ 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:11.621 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:11.622 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:11.622 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:11.622 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@9 -- # [[ output == output ]] 00:05:11.622 17:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:13.005 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:13.005 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:13.005 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:13.005 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:13.005 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:13.005 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:13.005 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:13.005 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:13.005 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:13.005 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:13.005 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:13.005 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:13.005 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:13.005 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:13.005 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:13.005 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:13.005 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.005 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.005 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43804720 kB' 'MemAvailable: 47289928 kB' 'Buffers: 2704 kB' 'Cached: 12322644 kB' 'SwapCached: 0 kB' 'Active: 9297300 kB' 'Inactive: 3493372 kB' 'Active(anon): 8908812 
kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468576 kB' 'Mapped: 186172 kB' 'Shmem: 8443488 kB' 'KReclaimable: 194372 kB' 'Slab: 554936 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360564 kB' 'KernelStack: 12640 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10004364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196372 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.006 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 
17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541720 kB' 'MemFree: 43805172 kB' 'MemAvailable: 47290380 kB' 'Buffers: 2704 kB' 'Cached: 12322648 kB' 'SwapCached: 0 kB' 'Active: 9297628 kB' 'Inactive: 3493372 kB' 'Active(anon): 8909140 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468940 kB' 'Mapped: 186152 kB' 'Shmem: 8443492 kB' 'KReclaimable: 194372 kB' 'Slab: 554884 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360512 kB' 'KernelStack: 12672 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10004380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196372 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.007 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.008 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 
17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@18 -- # local node= 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43805608 kB' 'MemAvailable: 47290816 kB' 'Buffers: 2704 kB' 'Cached: 12322668 kB' 'SwapCached: 0 kB' 'Active: 9297612 kB' 'Inactive: 3493372 kB' 'Active(anon): 8909124 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468916 kB' 'Mapped: 186152 kB' 'Shmem: 8443512 kB' 'KReclaimable: 194372 kB' 'Slab: 555012 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360640 kB' 'KernelStack: 12672 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10004404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196388 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 
17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.009 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 
17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.010 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:13.011 nr_hugepages=1024 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.011 resv_hugepages=0 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.011 surplus_hugepages=0 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.011 anon_hugepages=0 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43805608 kB' 'MemAvailable: 47290816 kB' 'Buffers: 2704 kB' 'Cached: 12322688 kB' 'SwapCached: 0 kB' 'Active: 9297656 kB' 'Inactive: 3493372 kB' 'Active(anon): 8909168 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468916 kB' 'Mapped: 186152 kB' 'Shmem: 8443532 kB' 'KReclaimable: 194372 kB' 'Slab: 555012 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360640 kB' 'KernelStack: 12672 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10004424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196388 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:13.011 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.011 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [identical skip iterations for MemFree through CmaTotal elided] 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.013 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:13.013 
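The long `[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `continue` runs in this trace are bash xtrace output of a field-by-field scan of `/proc/meminfo`: `get_meminfo` splits each line with `IFS=': ' read -r var val _`, skips every non-matching field, and echoes the value once the requested key matches (the `echo 1024` above). A minimal sketch of that loop, assuming a simplified signature — the function name and optional file argument here are illustrative, not the real SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the scan the trace is executing: walk a /proc/meminfo-style file,
# `continue` past non-matching fields, print the matching field's value.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated [[ ... ]] / continue lines in the log
        echo "${val%% *}"                  # drop the trailing "kB" unit when present
        return 0
    done < "$mem_f"
    return 1
}
```

On the machine in this log, such a call for `HugePages_Total` would print 1024, matching the `echo 1024` the trace records before `return 0`.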
17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21078188 kB' 'MemUsed: 11798752 kB' 'SwapCached: 0 kB' 'Active: 6339388 kB' 'Inactive: 3258956 kB' 'Active(anon): 6213572 kB' 'Inactive(anon): 0 kB' 'Active(file): 125816 kB' 'Inactive(file): 3258956 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9381528 kB' 'Mapped: 53352 kB' 'AnonPages: 219972 kB' 'Shmem: 5996756 kB' 'KernelStack: 7800 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103736 kB' 'Slab: 340316 kB' 'SReclaimable: 103736 kB' 'SUnreclaim: 236580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.013 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.013 
17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [identical skip iterations for Active through SReclaimable elided] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.276 17:53:20 
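Editor's aside: the `get_meminfo` calls traced here (setup/common.sh@17-29) pick a per-node meminfo file when a node argument is given and strip the `Node N ` prefix, so one parse loop serves both `/proc/meminfo` and the sysfs per-node files. A minimal standalone sketch of that selection and strip, using a temp file in place of `/sys/devices/system/node/node1/meminfo` (file name and sample values are ours, not the script's):

```shell
#!/usr/bin/env bash
# Hedged sketch of get_meminfo's source selection and "Node N " prefix
# strip (setup/common.sh@17-29). A temp file stands in for
# /sys/devices/system/node/node1/meminfo.
node=1
fake=$(mktemp)
printf '%s\n' 'Node 1 MemTotal: 27664780 kB' 'Node 1 HugePages_Surp: 0' > "$fake"

mem_f=/proc/meminfo            # default source, as in common.sh@22
[[ -e $fake ]] && mem_f=$fake  # the per-node file wins when it exists (common.sh@23-24)

shopt -s extglob               # needed for the +([0-9]) pattern below
mapfile -t mem < "$mem_f"      # slurp the file into an array (common.sh@28)
mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix (common.sh@29)

printf '%s\n' "${mem[@]}"
rm -f "$fake"
```

After the strip, each element is a plain `Key: value` line, which is what the `IFS=': ' read -r var val _` loop in the trace above consumes.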
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 22728100 kB' 'MemUsed: 4936680 kB' 'SwapCached: 0 kB' 'Active: 2958144 kB' 'Inactive: 234416 kB' 'Active(anon): 2695472 kB' 'Inactive(anon): 0 kB' 'Active(file): 262672 kB' 'Inactive(file): 234416 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2943908 kB' 'Mapped: 132800 kB' 'AnonPages: 248692 kB' 'Shmem: 2446820 kB' 'KernelStack: 4856 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90636 kB' 'Slab: 214696 kB' 'SReclaimable: 90636 kB' 'SUnreclaim: 124060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.276 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 
17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.277 17:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.277 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:13.278 node0=512 expecting 512 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 
expecting 512' 00:05:13.278 node1=512 expecting 512 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:13.278 00:05:13.278 real 0m1.532s 00:05:13.278 user 0m0.609s 00:05:13.278 sys 0m0.875s 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.278 17:53:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:13.278 ************************************ 00:05:13.278 END TEST per_node_1G_alloc 00:05:13.278 ************************************ 00:05:13.278 17:53:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:13.278 17:53:20 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:13.278 17:53:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.278 17:53:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.278 17:53:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:13.278 ************************************ 00:05:13.278 START TEST even_2G_alloc 00:05:13.278 ************************************ 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:13.278 
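Editor's aside: the `sorted_t[nodes_test[node]]=1` records traced earlier in this section use array subscripts as a set — duplicate per-node counts land on the same key, so a single surviving key means every node received the same allocation, which is what the final `[[ 512 == \5\1\2 ]]` check confirms. A rough standalone illustration (we use an associative array for clarity; the variable values mirror the trace):

```shell
#!/usr/bin/env bash
# Sketch of the uniqueness trick behind sorted_t[nodes_test[node]]=1
# (setup/hugepages.sh@126-130): using values as array indices collapses
# duplicates, so one surviving key means all nodes match.
declare -a nodes_test=(512 512)   # per-node hugepage counts from the trace
declare -A sorted_t=()
for node in "${!nodes_test[@]}"; do
  sorted_t[${nodes_test[node]}]=1   # duplicate counts land on the same key
done
echo "distinct counts: ${#sorted_t[@]}"   # 1 => evenly allocated
```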
17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup 
output
00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:13.278 17:53:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:14.659 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:14.659 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:14.659 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:14.659 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:14.659 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:14.659 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:14.659 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:14.659 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:14.659 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:14.659 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:14.659 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:14.659 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:14.659 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:14.659 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:14.659 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:14.659 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:14.659 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:14.659 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43801940 kB' 'MemAvailable: 47287148 kB' 'Buffers: 2704 kB' 'Cached: 12322776 kB' 'SwapCached: 0 kB' 'Active: 9298224 kB' 'Inactive: 3493372 kB' 'Active(anon): 8909736 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469288 kB' 'Mapped: 186300 kB' 'Shmem: 8443620 kB' 'KReclaimable: 194372 kB' 'Slab: 555084 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360712 kB' 'KernelStack: 12656 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10004624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB'
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.660 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43803232 kB' 'MemAvailable: 47288440 kB' 'Buffers: 2704 kB' 'Cached: 12322776 kB' 'SwapCached: 0 kB' 'Active: 9298476 kB' 'Inactive: 3493372 kB' 'Active(anon): 8909988 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469520 kB' 'Mapped: 186244 kB' 'Shmem: 8443620 kB' 'KReclaimable: 194372 kB' 'Slab: 555068 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360696 kB' 'KernelStack: 12688 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10004640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB'
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.661 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:14.663 17:53:22
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.663 
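The xtrace above comes from a `get_meminfo` helper in setup/common.sh: it reads /proc/meminfo line by line with `IFS=': ' read -r var val _`, hits `continue` on every non-matching field (the repeated lines in the trace), and prints the matched value, or 0 if the field is absent. A minimal standalone sketch of that pattern (a hypothetical rewrite for illustration, not the actual setup/common.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the field-scan pattern seen in the xtrace above:
# split each /proc/meminfo line on ': ' into field name and value,
# skip non-matching fields, print the first match (0 if none found).
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # one xtrace line per skipped field
        echo "$val"
        return 0
    done < "$mem_f"
    echo 0   # field absent: report 0, as the helper does for surp/resv
}

# demo against a captured sample (values taken from the snapshot in the log)
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541720 kB' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > "$sample"
get_meminfo HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

The optional second argument stands in for the per-node `/sys/devices/system/node/node$N/meminfo` path the real helper switches to when a node is given.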
17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.663 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43803484 kB' 'MemAvailable: 47288692 kB' 'Buffers: 2704 kB' 'Cached: 12322800 kB' 'SwapCached: 0 kB' 'Active: 9298000 kB' 'Inactive: 3493372 kB' 'Active(anon): 8909512 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469028 kB' 'Mapped: 186168 kB' 'Shmem: 8443644 kB' 'KReclaimable: 194372 kB' 'Slab: 555044 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360672 kB' 'KernelStack: 12688 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10004664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:14.663
[xtrace elided: the setup/common.sh@31-32 loop repeats the test/"continue" pair for every /proc/meminfo field (MemTotal through HugePages_Free) until HugePages_Rsvd matches]
17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.665 nr_hugepages=1024 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.665
resv_hugepages=0 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.665 surplus_hugepages=0 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.665 anon_hugepages=0 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.665 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43803868 kB' 'MemAvailable: 47289076 kB' 'Buffers: 2704 kB' 'Cached: 12322804 kB' 
'SwapCached: 0 kB' 'Active: 9298216 kB' 'Inactive: 3493372 kB' 'Active(anon): 8909728 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468828 kB' 'Mapped: 186168 kB' 'Shmem: 8443648 kB' 'KReclaimable: 194372 kB' 'Slab: 555044 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 360672 kB' 'KernelStack: 12672 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10004684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196388 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 
17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.666 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 
17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 
17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 
17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:14.667 
17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21070216 kB' 'MemUsed: 
11806724 kB' 'SwapCached: 0 kB' 'Active: 6339208 kB' 'Inactive: 3258956 kB' 'Active(anon): 6213392 kB' 'Inactive(anon): 0 kB' 'Active(file): 125816 kB' 'Inactive(file): 3258956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9381532 kB' 'Mapped: 53356 kB' 'AnonPages: 219716 kB' 'Shmem: 5996760 kB' 'KernelStack: 7784 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103720 kB' 'Slab: 340240 kB' 'SReclaimable: 103720 kB' 'SUnreclaim: 236520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.667 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 
17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.668 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 22734028 kB' 'MemUsed: 4930752 kB' 'SwapCached: 0 kB' 'Active: 2958620 kB' 'Inactive: 234416 kB' 'Active(anon): 2695948 kB' 
'Inactive(anon): 0 kB' 'Active(file): 262672 kB' 'Inactive(file): 234416 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2944036 kB' 'Mapped: 132812 kB' 'AnonPages: 249104 kB' 'Shmem: 2446948 kB' 'KernelStack: 4888 kB' 'PageTables: 3604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90636 kB' 'Slab: 214788 kB' 'SReclaimable: 90636 kB' 'SUnreclaim: 124152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.669 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:14.670 node0=512 expecting 512 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:14.670 node1=512 expecting 512 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:14.670 00:05:14.670 real 0m1.506s 00:05:14.670 user 0m0.642s 00:05:14.670 sys 0m0.821s 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.670 17:53:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:14.670 ************************************ 00:05:14.670 END TEST even_2G_alloc 00:05:14.670 ************************************ 00:05:14.670 17:53:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:14.670 17:53:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:14.670 17:53:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.670 17:53:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.670 
17:53:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.670 ************************************ 00:05:14.670 START TEST odd_alloc 00:05:14.670 ************************************ 00:05:14.670 17:53:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:14.670 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:14.670 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:14.670 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:14.670 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.670 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:14.670 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@83 -- # : 513 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.671 17:53:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.050 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:16.050 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:16.050 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:16.050 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:16.050 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:16.050 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:16.050 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:16.050 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:16.050 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:16.050 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:16.050 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:16.050 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:05:16.050 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:16.050 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:16.050 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:16.050 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:16.050 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.050 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43815044 kB' 'MemAvailable: 47300224 kB' 'Buffers: 2704 kB' 'Cached: 12322912 kB' 'SwapCached: 0 kB' 'Active: 9294700 kB' 'Inactive: 3493372 kB' 'Active(anon): 8906212 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465652 kB' 'Mapped: 185136 kB' 'Shmem: 8443756 kB' 'KReclaimable: 194316 kB' 'Slab: 555276 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360960 kB' 'KernelStack: 12592 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9991580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196340 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.050 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 
17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:16.051 17:53:23 
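The trace up to this point shows `get_meminfo` scanning /proc/meminfo one key at a time: every field that is not the requested one (`AnonHugePages` here) produces a `[[ Key == ... ]]` test followed by `continue`, which is why the log repeats that pair for each meminfo field before finally echoing the value. The parsing pattern can be sketched as follows; the function name `get_meminfo_value` and the sample data are illustrative stand-ins, not the actual helper from setup/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-parsing pattern visible in the trace: split each
# "Key: value kB" line on ': ' and print the value for one requested key.
get_meminfo_value() {
    local get=$1 file=$2
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key comes up -- under
        # `set -x` each skipped field logs a [[ ... ]] / continue pair.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Sample input standing in for /proc/meminfo.
sample=$(mktemp)
printf '%s\n' \
    'HugePages_Total: 1025' \
    'HugePages_Free: 1025' \
    'Hugepagesize: 2048 kB' > "$sample"

total=$(get_meminfo_value HugePages_Total "$sample")
echo "HugePages_Total=$total"
rm -f "$sample"
```

Because bash's `set -x` prints every command the loop executes, a single `get_meminfo` call over the full /proc/meminfo (roughly 50 fields) accounts for most of the volume of this log section.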
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43815044 kB' 'MemAvailable: 47300224 kB' 'Buffers: 2704 kB' 'Cached: 12322912 kB' 'SwapCached: 0 kB' 'Active: 9294616 kB' 'Inactive: 3493372 kB' 'Active(anon): 8906128 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465596 kB' 'Mapped: 185160 kB' 'Shmem: 8443756 kB' 'KReclaimable: 194316 kB' 'Slab: 555292 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360976 kB' 'KernelStack: 12544 kB' 'PageTables: 7404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 
'Committed_AS: 9991596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196308 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.051 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.052 17:53:23 
00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- [repetitive get_meminfo trace elided: setup/common.sh@31-32 read each remaining /proc/meminfo key (Slab … HugePages_Total) and hit `continue` on every key that is not HugePages_Surp] 00:05:16.052 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.052 17:53:23 
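The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo line by line: it reads each `Key: value` pair with `IFS=': '`, skips every key that does not match the requested one (here HugePages_Surp), and echoes the matching value. A minimal sketch of that pattern, with a hypothetical helper name and a small sample file instead of the live /proc/meminfo:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern exercised in the trace above (helper name
# is illustrative; the real function lives in setup/common.sh).
get_meminfo_value() {
  local get=$1 file=$2 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # every non-matching key is skipped
    echo "$val"
    return 0
  done < "$file"
  return 1                             # key not present in the file
}

# Demo against a meminfo-style sample rather than the live /proc/meminfo.
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541720 kB' 'HugePages_Total: 1025' \
              'HugePages_Surp: 0' > "$sample"
get_meminfo_value HugePages_Surp "$sample"   # prints 0
rm -f "$sample"
```

The `IFS=': '` setting splits on both the colon and the following spaces, so `var` gets the key, `val` gets the number, and the trailing `kB` unit falls into `_`.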
setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43815460 kB' 'MemAvailable: 47300640 kB' 'Buffers: 2704 kB' 'Cached: 12322932 kB' 'SwapCached: 0 kB' 'Active: 9294552 kB' 'Inactive: 3493372 kB' 'Active(anon): 8906064 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465496 kB' 'Mapped: 185084 kB' 'Shmem: 8443776 kB' 'KReclaimable: 194316 kB' 'Slab: 555300 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360984 kB' 'KernelStack: 12592 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9991616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196308 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:16.052 17:53:23 setup.sh.hugepages.odd_alloc -- [repetitive get_meminfo trace elided: setup/common.sh@31-32 read each /proc/meminfo key (MemTotal … CmaTotal) and hit `continue` on every key that is not HugePages_Rsvd] 00:05:16.053 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.053 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:16.053 nr_hugepages=1025 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.053 resv_hugepages=0 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.053 surplus_hugepages=0 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.053 anon_hugepages=0 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- 
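With the scans done, the log reports surplus, reserved, and anonymous hugepages all zero against nr_hugepages=1025, and hugepages.sh@107-109 verifies the bookkeeping. A hedged restatement of that consistency check, using the values from this run:

```shell
#!/usr/bin/env bash
# Restatement of the check traced at hugepages.sh@107-109: the kernel's
# HugePages_Total must account for the requested persistent pages plus any
# surplus and reserved pages. Values mirror the run logged above.
total=1025 nr_hugepages=1025 surp=0 resv=0
if (( total == nr_hugepages + surp + resv )); then
  echo "hugepage accounting consistent"
else
  echo "hugepage accounting mismatch" >&2
fi
```

With surp=0 and resv=0, the arithmetic reduces to `total == nr_hugepages`, which is exactly the fallback comparison at hugepages.sh@109.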
setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43812184 kB' 'MemAvailable: 47297364 kB' 'Buffers: 2704 kB' 'Cached: 12322940 kB' 'SwapCached: 0 kB' 'Active: 9297460 kB' 'Inactive: 3493372 kB' 'Active(anon): 8908972 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468432 kB' 'Mapped: 185524 kB' 'Shmem: 8443784 kB' 'KReclaimable: 194316 kB' 'Slab: 555300 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360984 kB' 'KernelStack: 12608 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9995236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196356 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.053 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.054 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( 
no_nodes > 0 )) 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21062988 kB' 'MemUsed: 11813952 kB' 'SwapCached: 0 kB' 'Active: 6336624 kB' 'Inactive: 3258956 kB' 'Active(anon): 6210808 kB' 'Inactive(anon): 0 kB' 'Active(file): 125816 kB' 'Inactive(file): 3258956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9381544 kB' 'Mapped: 52800 kB' 'AnonPages: 217120 kB' 'Shmem: 5996772 kB' 'KernelStack: 7720 kB' 'PageTables: 3992 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103680 kB' 'Slab: 340180 kB' 'SReclaimable: 103680 kB' 'SUnreclaim: 236500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 22745416 kB' 'MemUsed: 4919364 kB' 'SwapCached: 0 kB' 'Active: 2957760 kB' 'Inactive: 234416 kB' 'Active(anon): 2695088 kB' 'Inactive(anon): 0 kB' 'Active(file): 262672 kB' 'Inactive(file): 234416 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2944156 kB' 'Mapped: 132876 kB' 'AnonPages: 248144 kB' 'Shmem: 2447068 kB' 'KernelStack: 4888 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90636 kB' 'Slab: 215108 kB' 'SReclaimable: 90636 kB' 'SUnreclaim: 124472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.313 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.314 
17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:16.314 node0=512 expecting 513 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:16.314 node1=513 expecting 512 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:16.314 00:05:16.314 real 0m1.471s 00:05:16.314 user 0m0.556s 00:05:16.314 sys 0m0.877s 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.314 17:53:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:16.314 ************************************ 00:05:16.314 END TEST odd_alloc 00:05:16.314 ************************************ 00:05:16.314 17:53:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:16.314 17:53:23 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:16.314 17:53:23 
setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.314 17:53:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.314 17:53:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:16.314 ************************************ 00:05:16.314 START TEST custom_alloc 00:05:16.314 ************************************ 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:16.314 17:53:23 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output 
]] 00:05:16.314 17:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:17.695 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:17.695 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:17.695 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:17.695 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:17.695 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:17.695 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:17.695 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:17.695 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:17.695 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:17.695 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:17.695 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:17.695 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:17.695 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:17.695 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:17.695 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:17.695 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:17.695 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 
00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.695 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42766908 kB' 'MemAvailable: 46252088 kB' 'Buffers: 2704 kB' 'Cached: 12323040 kB' 'SwapCached: 0 kB' 'Active: 9295628 kB' 'Inactive: 3493372 kB' 'Active(anon): 8907140 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465596 kB' 'Mapped: 185284 kB' 'Shmem: 8443884 kB' 'KReclaimable: 194316 kB' 'Slab: 555032 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360716 kB' 'KernelStack: 12608 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9991704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196372 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.696 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:17.697 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42767188 kB' 'MemAvailable: 46252368 kB' 'Buffers: 2704 kB' 'Cached: 12323044 kB' 'SwapCached: 0 kB' 'Active: 9295228 kB' 'Inactive: 3493372 kB' 'Active(anon): 8906740 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465668 kB' 'Mapped: 185184 kB' 'Shmem: 8443888 kB' 'KReclaimable: 194316 kB' 'Slab: 555012 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360696 kB' 'KernelStack: 12592 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9991720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196324 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
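The meminfo snapshot above reports 'HugePages_Total: 1536', 'Hugepagesize: 2048 kB' and 'Hugetlb: 3145728 kB'; the last figure is the product of the first two, which is what verify_nr_hugepages is ultimately confirming after setup.sh requested nr_hugepages=1536. A quick sanity check of that relationship (values taken from the trace, not computed by the SPDK scripts this way):

```shell
#!/usr/bin/env bash
# Hugetlb accounting: total reserved hugetlb memory is the page count
# times the page size. Values below are copied from the trace above.
hugepages_total=1536   # HugePages_Total
hugepagesize_kb=2048   # Hugepagesize, in kB
hugetlb_kb=$(( hugepages_total * hugepagesize_kb ))
echo "$hugetlb_kb"     # 3145728, matching the 'Hugetlb:' line in the dump
```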
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.697 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.698 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.698 
17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[per-field scan elided: Slab through HugePages_Rsvd all fail the HugePages_Surp match; identical "IFS=': ' / read -r var val _ / continue" iterations removed]
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc --
setup/common.sh@19 -- # local var val
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:17.699 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42766780 kB' 'MemAvailable: 46251960 kB' 'Buffers: 2704 kB' 'Cached: 12323048 kB' 'SwapCached: 0 kB' 'Active: 9294940 kB' 'Inactive: 3493372 kB' 'Active(anon): 8906452 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465816 kB' 'Mapped: 185104 kB' 'Shmem: 8443892 kB' 'KReclaimable: 194316 kB' 'Slab: 555012 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360696 kB' 'KernelStack: 12624 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9991740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196308 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB'
[per-field scan elided: MemTotal through HugePages_Free all fail the HugePages_Rsvd match; identical "IFS=': ' / read -r var val _ / continue" iterations removed]
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:05:17.701 nr_hugepages=1536
17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:17.701 resv_hugepages=0
17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:17.701 surplus_hugepages=0
17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:17.701 anon_hugepages=0
17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages
))
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:17.701 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42767428 kB' 'MemAvailable: 46252608 kB' 'Buffers: 2704 kB' 'Cached: 12323084 kB' 'SwapCached: 0 kB' 'Active: 9294604 kB' 'Inactive: 3493372 kB' 'Active(anon): 8906116 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465400 kB' 'Mapped: 185104 kB' 'Shmem: 8443928 kB' 'KReclaimable: 194316 kB' 'Slab: 555012 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360696 kB' 'KernelStack: 12608 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9991764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196308 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB'
[per-field scan against HugePages_Total in progress: MemTotal through Active(file) fail the match; identical "IFS=': ' / read -r var val _ / continue" iterations removed]
00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 
17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.702 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 
17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.703 17:53:25 
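The trace above is the `get_meminfo` helper walking every meminfo line, `continue`-ing until the key matches (here `HugePages_Total`, yielding 1536). A minimal sketch of that parsing technique — the function name and the sample file below are illustrative, a simplification of the traced `setup/common.sh` logic, not its actual code:

```shell
#!/usr/bin/env bash
# Sketch of the loop traced above: split meminfo-style lines on ': '
# and print the value for one requested key. get_meminfo_value and the
# sample file are hypothetical stand-ins for the real helper.
get_meminfo_value() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        # Same per-line comparison the trace shows: skip every key
        # until the requested one matches, then echo its value.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# Usage with a small sample standing in for /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541720 kB' 'HugePages_Total: 1536' > "$sample"
get_meminfo_value HugePages_Total "$sample"   # prints 1536
rm -f "$sample"
```

With `IFS=': '`, the colon ends the key field and the following space is absorbed as whitespace, so `val` receives the bare number and the trailing `kB` (when present) lands in the throwaway `_` field.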
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21074048 kB' 'MemUsed: 11802892 kB' 'SwapCached: 0 kB' 'Active: 6336964 kB' 'Inactive: 3258956 kB' 'Active(anon): 6211148 kB' 'Inactive(anon): 0 kB' 'Active(file): 125816 kB' 'Inactive(file): 3258956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9381576 kB' 'Mapped: 52304 kB' 'AnonPages: 217432 kB' 'Shmem: 5996804 kB' 'KernelStack: 7752 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103680 kB' 'Slab: 340084 kB' 'SReclaimable: 103680 kB' 'SUnreclaim: 236404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.703 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:17.704 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.704 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 21694172 kB' 'MemUsed: 5970608 kB' 'SwapCached: 0 kB' 'Active: 2957776 kB' 'Inactive: 234416 kB' 'Active(anon): 2695104 kB' 'Inactive(anon): 0 kB' 'Active(file): 262672 kB' 'Inactive(file): 234416 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2944248 kB' 'Mapped: 132736 kB' 'AnonPages: 248084 kB' 'Shmem: 2447160 kB' 'KernelStack: 4840 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90636 kB' 'Slab: 214920 kB' 'SReclaimable: 90636 kB' 'SUnreclaim: 124284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:17.705 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.705 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.705 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.705 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:17.965 node0=512 expecting 512 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:17.965 node1=1024 expecting 1024 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:17.965 00:05:17.965 real 0m1.565s 00:05:17.965 user 0m0.655s 00:05:17.965 sys 0m0.872s 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.965 17:53:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:17.965 ************************************ 00:05:17.965 END TEST custom_alloc 00:05:17.965 ************************************ 00:05:17.966 17:53:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:17.966 17:53:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:17.966 17:53:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.966 17:53:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.966 17:53:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:17.966 ************************************ 00:05:17.966 START TEST no_shrink_alloc 00:05:17.966 ************************************ 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local 
node_ids 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.966 17:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:19.347 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:19.347 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:19.347 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:05:19.347 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:19.347 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:19.347 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:19.347 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:19.347 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:19.347 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:19.347 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:19.347 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:19.347 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:19.347 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:19.347 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:19.347 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:19.347 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:19.347 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.347 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.347 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43777920 kB' 'MemAvailable: 47263100 kB' 'Buffers: 2704 kB' 'Cached: 12323176 kB' 'SwapCached: 0 kB' 'Active: 9295520 kB' 'Inactive: 3493372 kB' 'Active(anon): 8907032 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465772 kB' 'Mapped: 185196 kB' 'Shmem: 8444020 kB' 'KReclaimable: 194316 kB' 'Slab: 555072 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360756 kB' 'KernelStack: 12608 kB' 'PageTables: 7320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9994660 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.348 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 
17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 
17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.349 
17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43778340 kB' 'MemAvailable: 47263520 kB' 'Buffers: 2704 kB' 'Cached: 12323180 kB' 'SwapCached: 0 kB' 'Active: 9296108 kB' 'Inactive: 3493372 kB' 'Active(anon): 8907620 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466796 kB' 'Mapped: 185188 kB' 'Shmem: 8444024 kB' 'KReclaimable: 194316 kB' 'Slab: 555072 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360756 kB' 'KernelStack: 12896 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9994680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 
17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.349 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 
17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.350 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43776976 kB' 'MemAvailable: 47262156 kB' 'Buffers: 2704 kB' 'Cached: 12323180 kB' 'SwapCached: 0 kB' 'Active: 9297708 kB' 'Inactive: 3493372 kB' 'Active(anon): 8909220 kB' 'Inactive(anon): 0 kB' 'Active(file): 
388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468372 kB' 'Mapped: 185188 kB' 'Shmem: 8444024 kB' 'KReclaimable: 194316 kB' 'Slab: 555072 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360756 kB' 'KernelStack: 12944 kB' 'PageTables: 9488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9994700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.351 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.351 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.352 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.353 17:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.353 nr_hugepages=1024 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.353 resv_hugepages=0 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.353 surplus_hugepages=0 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.353 anon_hugepages=0 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43779324 kB' 'MemAvailable: 47264504 kB' 'Buffers: 2704 kB' 'Cached: 12323220 kB' 'SwapCached: 0 kB' 'Active: 9295556 kB' 'Inactive: 3493372 kB' 'Active(anon): 8907068 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466140 kB' 'Mapped: 185164 kB' 'Shmem: 8444064 kB' 'KReclaimable: 194316 kB' 'Slab: 555136 kB' 'SReclaimable: 194316 kB' 'SUnreclaim: 360820 kB' 'KernelStack: 12656 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9994724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 
kB' 'DirectMap1G: 52428800 kB'
00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:19.353 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
read -r var val _
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20019596 kB' 'MemUsed: 12857344 kB' 'SwapCached: 0 kB' 'Active: 6337140 kB' 'Inactive: 3258956 kB' 'Active(anon): 6211324 kB' 'Inactive(anon): 0 kB' 'Active(file): 125816 kB' 'Inactive(file): 3258956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9381584 kB' 'Mapped: 52364 kB' 'AnonPages: 217628 kB' 'Shmem: 5996812 kB' 'KernelStack: 7688 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103680 kB'
'Slab: 340248 kB' 'SReclaimable: 103680 kB' 'SUnreclaim: 236568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.355 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
setup/common.sh@32 -- # continue 00:05:19.356 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.356 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.356 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.356 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.356 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.356 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:19.357 node0=1024 expecting 1024 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.357 17:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:20.738 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:20.738 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:20.738 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:20.738 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:20.738 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:20.738 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:20.738 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:20.738 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:20.738 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:20.738 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:20.738 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:20.738 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:05:20.738 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:20.738 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:20.738 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:20.738 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:20.738 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:20.738 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43766800 kB' 'MemAvailable: 47251964 kB' 'Buffers: 2704 kB' 'Cached: 12323288 kB' 'SwapCached: 0 kB' 'Active: 9299000 kB' 'Inactive: 3493372 kB' 'Active(anon): 8910512 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469632 kB' 'Mapped: 185776 kB' 'Shmem: 8444132 kB' 'KReclaimable: 194284 kB' 'Slab: 554720 kB' 'SReclaimable: 194284 kB' 'SUnreclaim: 360436 kB' 'KernelStack: 12576 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9997360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196308 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.738 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.739 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.739 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43766800 kB' 'MemAvailable: 47251964 kB' 'Buffers: 2704 kB' 'Cached: 12323292 kB' 'SwapCached: 0 kB' 'Active: 9300852 kB' 'Inactive: 3493372 kB' 'Active(anon): 8912364 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471524 kB' 'Mapped: 186068 kB' 'Shmem: 8444136 kB' 'KReclaimable: 194284 kB' 'Slab: 554720 kB' 'SReclaimable: 194284 kB' 'SUnreclaim: 360436 kB' 'KernelStack: 12624 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9998708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196312 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.740 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.741 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43769436 kB' 'MemAvailable: 47254600 kB' 'Buffers: 2704 kB' 'Cached: 12323296 kB' 'SwapCached: 0 kB' 'Active: 9297120 kB' 'Inactive: 3493372 kB' 'Active(anon): 8908632 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467868 kB' 'Mapped: 185964 kB' 'Shmem: 8444140 kB' 'KReclaimable: 194284 kB' 'Slab: 554708 kB' 'SReclaimable: 194284 kB' 'SUnreclaim: 360424 kB' 'KernelStack: 12656 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9994760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196260 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.742 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.743 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:21.004 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.004 nr_hugepages=1024 00:05:21.004 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.004 resv_hugepages=0 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.005 surplus_hugepages=0 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.005 anon_hugepages=0 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43765428 kB' 'MemAvailable: 47250592 kB' 'Buffers: 2704 kB' 'Cached: 12323316 kB' 'SwapCached: 0 kB' 'Active: 9300368 kB' 'Inactive: 3493372 kB' 'Active(anon): 8911880 kB' 'Inactive(anon): 0 kB' 'Active(file): 388488 kB' 'Inactive(file): 3493372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470992 kB' 'Mapped: 185556 kB' 'Shmem: 8444160 kB' 'KReclaimable: 194284 kB' 'Slab: 554688 kB' 'SReclaimable: 194284 kB' 'SUnreclaim: 360404 kB' 'KernelStack: 12592 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9998756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196296 kB' 'VmallocChunk: 0 kB' 'Percpu: 31680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1789532 kB' 'DirectMap2M: 14907392 kB' 'DirectMap1G: 52428800 kB' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.005 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.006 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.007 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.007 17:53:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.007 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20018596 kB' 'MemUsed: 12858344 kB' 'SwapCached: 0 kB' 'Active: 6337896 kB' 'Inactive: 3258956 kB' 'Active(anon): 6212080 kB' 'Inactive(anon): 0 kB' 'Active(file): 125816 kB' 'Inactive(file): 3258956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9381660 kB' 'Mapped: 53052 kB' 'AnonPages: 218332 kB' 'Shmem: 5996888 kB' 'KernelStack: 7704 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103680 kB' 'Slab: 340040 kB' 'SReclaimable: 103680 kB' 'SUnreclaim: 236360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 
17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.008 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.009 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.010 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.010 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.010 node0=1024 expecting 1024 00:05:21.010 17:53:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.010 00:05:21.010 real 0m3.028s 
00:05:21.010 user 0m1.207s 00:05:21.010 sys 0m1.747s 00:05:21.010 17:53:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.010 17:53:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:21.010 ************************************ 00:05:21.010 END TEST no_shrink_alloc 00:05:21.010 ************************************ 00:05:21.010 17:53:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:21.010 17:53:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:21.010 00:05:21.010 real 0m12.040s 00:05:21.010 user 0m4.540s 00:05:21.010 sys 
0m6.360s 00:05:21.010 17:53:28 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.010 17:53:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:21.010 ************************************ 00:05:21.010 END TEST hugepages 00:05:21.010 ************************************ 00:05:21.010 17:53:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:21.010 17:53:28 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:21.010 17:53:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.010 17:53:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.010 17:53:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:21.010 ************************************ 00:05:21.010 START TEST driver 00:05:21.010 ************************************ 00:05:21.010 17:53:28 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:21.010 * Looking for test storage... 
00:05:21.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:21.010 17:53:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:21.010 17:53:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.010 17:53:28 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.547 17:53:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:23.547 17:53:31 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.547 17:53:31 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.547 17:53:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:23.807 ************************************ 00:05:23.807 START TEST guess_driver 00:05:23.807 ************************************ 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 141 > 0 )) 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:23.807 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:23.807 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:23.807 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:23.807 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:23.807 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:23.807 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:23.807 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:23.807 Looking for driver=vfio-pci 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.807 17:53:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:25.193 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.193 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.193 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.193 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.193 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.193 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.193 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.193 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.194 17:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:26.133 17:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:26.133 17:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:26.133 17:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:26.133 17:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:26.133 17:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:26.133 17:53:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:26.133 17:53:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.417 00:05:29.417 real 0m5.143s 00:05:29.417 user 0m1.190s 00:05:29.417 sys 0m1.965s 00:05:29.417 17:53:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.417 17:53:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:29.417 ************************************ 00:05:29.417 END TEST guess_driver 00:05:29.417 ************************************ 00:05:29.417 17:53:36 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:29.417 00:05:29.417 real 0m7.864s 00:05:29.417 user 0m1.783s 00:05:29.417 sys 0m3.049s 00:05:29.417 17:53:36 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.417 17:53:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:29.417 ************************************ 00:05:29.417 END TEST driver 00:05:29.417 ************************************ 00:05:29.417 17:53:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:29.417 17:53:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:29.417 17:53:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.417 17:53:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.417 17:53:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:29.417 ************************************ 00:05:29.417 START TEST devices 00:05:29.417 ************************************ 00:05:29.417 17:53:36 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:29.417 * Looking for test storage... 
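The long run of `read -r _ _ _ _ marker setup_driver` / `[[ -> == \-\> ]]` / `[[ vfio-pci == vfio-pci ]]` triplets above is one loop iterating over `setup.sh` output. A hypothetical sketch of that loop (field layout and sample line are assumptions, not taken from the real script):

```shell
#!/usr/bin/env bash
# Sketch of the guess_driver check traced above: each setup.sh status line
# ends in "-> <driver>", and the test fails if any guessed driver is not
# vfio-pci. The sample input line below is invented for illustration.
fail=0
check_driver_lines() {
  while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue          # only lines carrying a driver
    [[ $setup_driver == vfio-pci ]] || fail=1  # any other driver fails the test
  done
}
check_driver_lines <<< '0000:88:00.0 NVMe device (rev) -> vfio-pci'
echo "fail=$fail"
```

With the sample line the guessed driver matches, so `fail` stays 0, mirroring the `(( fail == 0 ))` gate at the end of the trace.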
00:05:29.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:29.417 17:53:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:29.417 17:53:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:29.417 17:53:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.417 17:53:36 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:30.353 17:53:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:30.353 17:53:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:30.353 17:53:38 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:30.353 17:53:38 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.353 17:53:38 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:30.353 17:53:38 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:30.353 17:53:38 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.353 17:53:38 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
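The `get_zoned_devs` trace above checks `/sys/block/nvme*/queue/zoned` and treats any value other than `none` as a zoned device. A minimal sketch of that logic, run against a simulated sysfs tree under a temp directory rather than the real `/sys`:

```shell
#!/usr/bin/env bash
# Hedged sketch of the zoned-device check: a block device is zoned when
# its queue/zoned sysfs attribute reads anything other than "none".
sysroot=$(mktemp -d)
mkdir -p "$sysroot/block/nvme0n1/queue"
echo none > "$sysroot/block/nvme0n1/queue/zoned"   # conventional NVMe disk

is_block_zoned() {
  local device=$1
  [[ -e $sysroot/block/$device/queue/zoned ]] || return 1
  [[ $(< "$sysroot/block/$device/queue/zoned") != none ]]
}

if is_block_zoned nvme0n1; then zoned=yes; else zoned=no; fi
echo "zoned=$zoned"
```

Here the attribute reads `none`, so the device is not zoned, matching the `[[ none != none ]]` comparison in the trace.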
00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:30.353 17:53:38 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:30.353 17:53:38 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:30.353 17:53:38 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:30.613 No valid GPT data, bailing 00:05:30.613 17:53:38 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.613 17:53:38 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:30.613 17:53:38 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:30.613 17:53:38 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:30.613 17:53:38 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:30.613 17:53:38 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:30.613 17:53:38 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:30.613 17:53:38 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:30.613 17:53:38 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:30.613 17:53:38 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:30.613 17:53:38 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:30.613 17:53:38 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:30.613 17:53:38 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:30.613 17:53:38 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.613 17:53:38 setup.sh.devices -- 
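The trace above gates the disk on `min_disk_size=3221225472` (3 GiB) and echoes `1000204886016` bytes for the NVMe device. The arithmetic behind that gate, reusing the values printed in the log (on a live system the byte count would come from the device's sector count times the sector size):

```shell
#!/usr/bin/env bash
# The size gate from the trace: the block device must be at least
# min_disk_size to be used as a test disk.
min_disk_size=3221225472    # 3 GiB, as declared in devices.sh@198
dev_bytes=1000204886016     # bytes reported for nvme0n1 in the log
if (( dev_bytes >= min_disk_size )); then usable=1; else usable=0; fi
echo "usable=$usable"
```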
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.613 17:53:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:30.613 ************************************ 00:05:30.613 START TEST nvme_mount 00:05:30.613 ************************************ 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:30.613 17:53:38 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:30.613 17:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:31.552 Creating new GPT entries in memory. 00:05:31.552 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:31.552 other utilities. 00:05:31.552 17:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:31.552 17:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.552 17:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.552 17:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.552 17:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:32.491 Creating new GPT entries in memory. 00:05:32.491 The operation has completed successfully. 
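The `sgdisk /dev/nvme0n1 --new=1:2048:2099199` call above follows from the sector arithmetic traced in `setup/common.sh` (`size /= 512`, then `part_end = part_start + size - 1`). The computation, reproduced standalone:

```shell
#!/usr/bin/env bash
# Sector math behind "--new=1:2048:2099199": a 1 GiB partition is
# 1073741824 / 512 = 2097152 sectors; starting at sector 2048 it ends
# at 2048 + 2097152 - 1 = 2099199 (ranges are inclusive).
size=1073741824
(( size /= 512 ))                       # bytes -> 512-byte sectors
part_start=2048
(( part_end = part_start + size - 1 ))
echo "--new=1:${part_start}:${part_end}"
```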
00:05:32.491 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:32.491 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.491 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2205400 00:05:32.491 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.491 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:32.491 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.491 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:32.491 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.750 17:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.130 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:34.131 
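The repeated `read -r pci _ _ status` / `[[ 0000:xx:xx.x == \0\0\0\0\:\8\8\:\0\0\.\0 ]]` pairs above are the `verify` loop scanning `setup.sh config` output, one line per PCI function, looking for the target BDF whose status reports the active mount. A hypothetical sketch (the exact field layout and sample line are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the verify scan: find the target BDF and set found=1 when its
# status line mentions the active devices. Input line is invented.
found=0
target='0000:88:00.0'
scan_status() {
  local pci _skip1 _skip2 status
  while read -r pci _skip1 _skip2 status; do
    [[ $pci == "$target" && $status == *'Active devices:'* ]] && found=1
  done
}
scan_status <<< '0000:88:00.0 x x Active devices: mount@nvme0n1:nvme0n1p1'
echo "found=$found"
```

All the non-matching BDFs (`0000:00:04.*`, `0000:80:04.*` in the log) simply fail the comparison and the loop reads on, which is why the trace shows one comparison per IOAT channel.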
17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:34.131 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.131 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:34.390 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:34.390 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:34.390 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:34.390 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.390 17:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.768 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:35.769 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.770 17:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:35.771 17:53:43 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:35.771 17:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.771 17:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.157 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.157 00:05:37.157 real 0m6.551s 00:05:37.157 user 0m1.499s 00:05:37.157 sys 0m2.623s 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.157 17:53:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:37.157 ************************************ 00:05:37.157 END TEST nvme_mount 00:05:37.157 ************************************ 00:05:37.157 17:53:44 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:37.157 17:53:44 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
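The `cleanup_nvme` trace above unmounts the test mount point and then wipes signatures from the partition and the whole disk (the `wipefs` output shows the ext4 magic `53 ef` and GPT `EFI PART` headers being erased). A hedged sketch of that sequence, defined as a function only, since running it against a real disk is destructive:

```shell
#!/usr/bin/env bash
# Sketch of the cleanup step: unmount if mounted, then wipe filesystem
# and partition-table signatures. Deliberately not invoked here.
cleanup_nvme() {
  local mnt=$1 disk=$2
  mountpoint -q "$mnt" && umount "$mnt"          # drop the test mount
  [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1" # ext4 magic on the partition
  [[ -b $disk ]] && wipefs --all "$disk"         # GPT headers + protective MBR
}
# A real call would look like:
#   cleanup_nvme /path/to/nvme_mount /dev/nvme0n1
echo defined
```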
00:05:37.157 17:53:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.157 17:53:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.157 17:53:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:37.157 ************************************ 00:05:37.157 START TEST dm_mount 00:05:37.157 ************************************ 00:05:37.157 17:53:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:37.157 17:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:37.158 17:53:44 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:38.096 Creating new GPT entries in memory. 00:05:38.097 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:38.097 other utilities. 00:05:38.097 17:53:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:38.097 17:53:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.097 17:53:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:38.097 17:53:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:38.097 17:53:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:39.478 Creating new GPT entries in memory. 00:05:39.478 The operation has completed successfully. 00:05:39.478 17:53:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:39.478 17:53:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:39.478 17:53:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:39.478 17:53:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:39.478 17:53:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:40.416 The operation has completed successfully. 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2207792 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:40.416 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.417 17:53:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.353 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.354 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:41.614 17:53:49 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.614 17:53:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.990 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:42.991 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:42.991 00:05:42.991 real 0m5.878s 00:05:42.991 user 0m1.040s 00:05:42.991 sys 0m1.676s 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.991 17:53:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:42.991 ************************************ 00:05:42.991 END TEST dm_mount 00:05:42.991 ************************************ 00:05:42.991 17:53:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:42.991 17:53:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:42.991 17:53:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:42.991 17:53:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.991 17:53:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.991 17:53:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:42.991 17:53:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:42.991 17:53:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.250 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:43.250 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:05:43.250 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.250 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.250 17:53:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:43.250 17:53:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:43.250 17:53:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:43.250 17:53:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.250 17:53:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:43.250 17:53:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.250 17:53:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:43.250 00:05:43.250 real 0m14.440s 00:05:43.250 user 0m3.248s 00:05:43.250 sys 0m5.369s 00:05:43.250 17:53:50 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.250 17:53:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:43.250 ************************************ 00:05:43.250 END TEST devices 00:05:43.250 ************************************ 00:05:43.250 17:53:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:43.250 00:05:43.250 real 0m45.713s 00:05:43.250 user 0m13.189s 00:05:43.250 sys 0m20.506s 00:05:43.250 17:53:50 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.250 17:53:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:43.250 ************************************ 00:05:43.250 END TEST setup.sh 00:05:43.250 ************************************ 00:05:43.508 17:53:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.508 17:53:50 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:44.444 Hugepages 00:05:44.444 node hugesize free / total 
00:05:44.444 node0 1048576kB 0 / 0 00:05:44.444 node0 2048kB 2048 / 2048 00:05:44.444 node1 1048576kB 0 / 0 00:05:44.444 node1 2048kB 0 / 0 00:05:44.444 00:05:44.444 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.444 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:44.444 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:44.444 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:44.444 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:44.444 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:44.444 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:44.444 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:44.444 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:44.444 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:44.444 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:44.444 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:44.444 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:44.444 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:44.444 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:44.444 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:44.444 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:44.703 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:44.703 17:53:52 -- spdk/autotest.sh@130 -- # uname -s 00:05:44.703 17:53:52 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:44.703 17:53:52 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:44.703 17:53:52 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:45.711 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.711 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.711 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.970 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.970 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.970 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.970 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.970 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:45.970 
0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.970 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.970 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.970 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.970 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.970 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.970 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.970 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:46.910 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:46.910 17:53:54 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:48.285 17:53:55 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:48.285 17:53:55 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:48.285 17:53:55 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:48.285 17:53:55 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:48.285 17:53:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:48.285 17:53:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:48.285 17:53:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.285 17:53:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:48.285 17:53:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:48.285 17:53:55 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:48.285 17:53:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:48.285 17:53:55 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:49.222 Waiting for block devices as requested 00:05:49.222 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:49.481 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:49.481 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:49.740 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:49.740 0000:00:04.4 (8086 
0e24): vfio-pci -> ioatdma 00:05:49.740 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:49.740 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:50.000 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:50.000 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:50.000 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:50.000 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:50.260 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:50.260 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:50.260 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:50.520 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:50.520 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:50.520 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:50.779 17:53:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:50.779 17:53:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:50.779 17:53:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:50.779 17:53:58 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:50.779 17:53:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:50.779 17:53:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:50.779 17:53:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:50.779 17:53:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:50.779 17:53:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:50.779 17:53:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:50.779 17:53:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:50.779 17:53:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:50.779 17:53:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:50.779 17:53:58 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:50.779 17:53:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:50.779 17:53:58 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:50.779 17:53:58 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:50.779 17:53:58 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:50.780 17:53:58 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:50.780 17:53:58 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:50.780 17:53:58 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:50.780 17:53:58 -- common/autotest_common.sh@1557 -- # continue 00:05:50.780 17:53:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:50.780 17:53:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.780 17:53:58 -- common/autotest_common.sh@10 -- # set +x 00:05:50.780 17:53:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:50.780 17:53:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.780 17:53:58 -- common/autotest_common.sh@10 -- # set +x 00:05:50.780 17:53:58 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:52.155 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:52.155 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:52.155 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:52.155 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:52.155 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:52.155 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:52.155 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:52.155 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:52.155 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:52.155 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:52.155 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:52.155 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:52.155 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:52.155 0000:80:04.2 (8086 
0e22): ioatdma -> vfio-pci 00:05:52.155 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:52.155 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:53.092 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:53.092 17:54:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:53.092 17:54:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.092 17:54:00 -- common/autotest_common.sh@10 -- # set +x 00:05:53.092 17:54:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:53.092 17:54:00 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:53.092 17:54:00 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:53.092 17:54:00 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:53.092 17:54:00 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:53.092 17:54:00 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:53.092 17:54:00 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:53.092 17:54:00 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:53.092 17:54:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:53.092 17:54:00 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:53.092 17:54:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:53.092 17:54:00 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:53.092 17:54:00 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:53.092 17:54:00 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:53.092 17:54:00 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:53.092 17:54:00 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:53.092 17:54:00 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:53.092 17:54:00 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:53.092 17:54:00 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:53.092 17:54:00 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:53.092 17:54:00 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2213147 00:05:53.092 17:54:00 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.092 17:54:00 -- common/autotest_common.sh@1598 -- # waitforlisten 2213147 00:05:53.092 17:54:00 -- common/autotest_common.sh@829 -- # '[' -z 2213147 ']' 00:05:53.092 17:54:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.092 17:54:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.092 17:54:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.092 17:54:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.092 17:54:00 -- common/autotest_common.sh@10 -- # set +x 00:05:53.349 [2024-07-23 17:54:00.785051] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:05:53.349 [2024-07-23 17:54:00.785144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213147 ] 00:05:53.349 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.349 [2024-07-23 17:54:00.844343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.349 [2024-07-23 17:54:00.932799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.606 17:54:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.606 17:54:01 -- common/autotest_common.sh@862 -- # return 0 00:05:53.606 17:54:01 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:53.606 17:54:01 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:53.606 17:54:01 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:56.892 nvme0n1 00:05:56.892 17:54:04 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:56.892 [2024-07-23 17:54:04.486507] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:56.892 [2024-07-23 17:54:04.486553] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:56.892 request: 00:05:56.892 { 00:05:56.892 "nvme_ctrlr_name": "nvme0", 00:05:56.892 "password": "test", 00:05:56.892 "method": "bdev_nvme_opal_revert", 00:05:56.892 "req_id": 1 00:05:56.892 } 00:05:56.892 Got JSON-RPC error response 00:05:56.892 response: 00:05:56.892 { 00:05:56.892 "code": -32603, 00:05:56.892 "message": "Internal error" 00:05:56.892 } 00:05:56.892 17:54:04 -- common/autotest_common.sh@1604 -- # true 00:05:56.892 17:54:04 -- common/autotest_common.sh@1605 -- # 
(( ++bdf_id )) 00:05:56.892 17:54:04 -- common/autotest_common.sh@1608 -- # killprocess 2213147 00:05:56.892 17:54:04 -- common/autotest_common.sh@948 -- # '[' -z 2213147 ']' 00:05:56.892 17:54:04 -- common/autotest_common.sh@952 -- # kill -0 2213147 00:05:56.892 17:54:04 -- common/autotest_common.sh@953 -- # uname 00:05:56.892 17:54:04 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.892 17:54:04 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213147 00:05:56.892 17:54:04 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.892 17:54:04 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.892 17:54:04 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213147' 00:05:56.892 killing process with pid 2213147 00:05:56.892 17:54:04 -- common/autotest_common.sh@967 -- # kill 2213147 00:05:56.892 17:54:04 -- common/autotest_common.sh@972 -- # wait 2213147 00:05:58.792 17:54:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:58.792 17:54:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:58.792 17:54:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:58.792 17:54:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:58.792 17:54:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:58.792 17:54:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.792 17:54:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.792 17:54:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:58.792 17:54:06 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:58.792 17:54:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.792 17:54:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.792 17:54:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.792 ************************************ 00:05:58.792 START TEST env 00:05:58.792 ************************************ 00:05:58.792 17:54:06 env -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:58.792 * Looking for test storage... 00:05:58.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:58.792 17:54:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.792 17:54:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.792 17:54:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.792 17:54:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.792 ************************************ 00:05:58.792 START TEST env_memory 00:05:58.792 ************************************ 00:05:58.792 17:54:06 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.792 00:05:58.792 00:05:58.792 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.792 http://cunit.sourceforge.net/ 00:05:58.792 00:05:58.792 00:05:58.792 Suite: memory 00:05:58.792 Test: alloc and free memory map ...[2024-07-23 17:54:06.397111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:58.792 passed 00:05:58.792 Test: mem map translation ...[2024-07-23 17:54:06.418031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:58.792 [2024-07-23 17:54:06.418053] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:58.792 [2024-07-23 17:54:06.418103] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual 
address 281474976710656 00:05:58.792 [2024-07-23 17:54:06.418115] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:59.051 passed 00:05:59.051 Test: mem map registration ...[2024-07-23 17:54:06.462631] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:59.051 [2024-07-23 17:54:06.462652] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:59.051 passed 00:05:59.051 Test: mem map adjacent registrations ...passed 00:05:59.051 00:05:59.051 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.051 suites 1 1 n/a 0 0 00:05:59.051 tests 4 4 4 0 0 00:05:59.051 asserts 152 152 152 0 n/a 00:05:59.051 00:05:59.051 Elapsed time = 0.150 seconds 00:05:59.051 00:05:59.051 real 0m0.158s 00:05:59.051 user 0m0.150s 00:05:59.051 sys 0m0.007s 00:05:59.051 17:54:06 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.051 17:54:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:59.051 ************************************ 00:05:59.051 END TEST env_memory 00:05:59.051 ************************************ 00:05:59.051 17:54:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:59.051 17:54:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:59.051 17:54:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.051 17:54:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.051 17:54:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.051 ************************************ 00:05:59.051 START TEST env_vtophys 00:05:59.051 ************************************ 00:05:59.051 17:54:06 
env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:59.051 EAL: lib.eal log level changed from notice to debug 00:05:59.051 EAL: Detected lcore 0 as core 0 on socket 0 00:05:59.051 EAL: Detected lcore 1 as core 1 on socket 0 00:05:59.051 EAL: Detected lcore 2 as core 2 on socket 0 00:05:59.051 EAL: Detected lcore 3 as core 3 on socket 0 00:05:59.051 EAL: Detected lcore 4 as core 4 on socket 0 00:05:59.051 EAL: Detected lcore 5 as core 5 on socket 0 00:05:59.051 EAL: Detected lcore 6 as core 8 on socket 0 00:05:59.051 EAL: Detected lcore 7 as core 9 on socket 0 00:05:59.051 EAL: Detected lcore 8 as core 10 on socket 0 00:05:59.051 EAL: Detected lcore 9 as core 11 on socket 0 00:05:59.051 EAL: Detected lcore 10 as core 12 on socket 0 00:05:59.052 EAL: Detected lcore 11 as core 13 on socket 0 00:05:59.052 EAL: Detected lcore 12 as core 0 on socket 1 00:05:59.052 EAL: Detected lcore 13 as core 1 on socket 1 00:05:59.052 EAL: Detected lcore 14 as core 2 on socket 1 00:05:59.052 EAL: Detected lcore 15 as core 3 on socket 1 00:05:59.052 EAL: Detected lcore 16 as core 4 on socket 1 00:05:59.052 EAL: Detected lcore 17 as core 5 on socket 1 00:05:59.052 EAL: Detected lcore 18 as core 8 on socket 1 00:05:59.052 EAL: Detected lcore 19 as core 9 on socket 1 00:05:59.052 EAL: Detected lcore 20 as core 10 on socket 1 00:05:59.052 EAL: Detected lcore 21 as core 11 on socket 1 00:05:59.052 EAL: Detected lcore 22 as core 12 on socket 1 00:05:59.052 EAL: Detected lcore 23 as core 13 on socket 1 00:05:59.052 EAL: Detected lcore 24 as core 0 on socket 0 00:05:59.052 EAL: Detected lcore 25 as core 1 on socket 0 00:05:59.052 EAL: Detected lcore 26 as core 2 on socket 0 00:05:59.052 EAL: Detected lcore 27 as core 3 on socket 0 00:05:59.052 EAL: Detected lcore 28 as core 4 on socket 0 00:05:59.052 EAL: Detected lcore 29 as core 5 on socket 0 00:05:59.052 EAL: Detected lcore 30 as core 8 on socket 0 
00:05:59.052 EAL: Detected lcore 31 as core 9 on socket 0 00:05:59.052 EAL: Detected lcore 32 as core 10 on socket 0 00:05:59.052 EAL: Detected lcore 33 as core 11 on socket 0 00:05:59.052 EAL: Detected lcore 34 as core 12 on socket 0 00:05:59.052 EAL: Detected lcore 35 as core 13 on socket 0 00:05:59.052 EAL: Detected lcore 36 as core 0 on socket 1 00:05:59.052 EAL: Detected lcore 37 as core 1 on socket 1 00:05:59.052 EAL: Detected lcore 38 as core 2 on socket 1 00:05:59.052 EAL: Detected lcore 39 as core 3 on socket 1 00:05:59.052 EAL: Detected lcore 40 as core 4 on socket 1 00:05:59.052 EAL: Detected lcore 41 as core 5 on socket 1 00:05:59.052 EAL: Detected lcore 42 as core 8 on socket 1 00:05:59.052 EAL: Detected lcore 43 as core 9 on socket 1 00:05:59.052 EAL: Detected lcore 44 as core 10 on socket 1 00:05:59.052 EAL: Detected lcore 45 as core 11 on socket 1 00:05:59.052 EAL: Detected lcore 46 as core 12 on socket 1 00:05:59.052 EAL: Detected lcore 47 as core 13 on socket 1 00:05:59.052 EAL: Maximum logical cores by configuration: 128 00:05:59.052 EAL: Detected CPU lcores: 48 00:05:59.052 EAL: Detected NUMA nodes: 2 00:05:59.052 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:59.052 EAL: Detected shared linkage of DPDK 00:05:59.052 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:59.052 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:59.052 EAL: Registered [vdev] bus. 
00:05:59.052 EAL: bus.vdev log level changed from disabled to notice 00:05:59.052 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:59.052 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:59.052 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:59.052 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:59.052 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:59.052 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:59.052 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:59.052 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:59.052 EAL: No shared files mode enabled, IPC will be disabled 00:05:59.052 EAL: No shared files mode enabled, IPC is disabled 00:05:59.052 EAL: Bus pci wants IOVA as 'DC' 00:05:59.052 EAL: Bus vdev wants IOVA as 'DC' 00:05:59.052 EAL: Buses did not request a specific IOVA mode. 00:05:59.052 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:59.052 EAL: Selected IOVA mode 'VA' 00:05:59.052 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.052 EAL: Probing VFIO support... 00:05:59.052 EAL: IOMMU type 1 (Type 1) is supported 00:05:59.052 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:59.052 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:59.052 EAL: VFIO support initialized 00:05:59.052 EAL: Ask a virtual area of 0x2e000 bytes 00:05:59.052 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:59.052 EAL: Setting up physically contiguous memory... 
00:05:59.052 EAL: Setting maximum number of open files to 524288 00:05:59.052 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:59.052 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:59.052 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:59.052 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.052 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:59.052 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.052 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.052 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:59.052 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:59.052 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.052 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:59.052 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.052 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.052 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:59.052 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:59.052 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.052 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:59.052 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.052 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.052 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:59.052 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:59.052 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.052 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:59.052 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.052 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.052 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:59.052 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:59.052 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:59.052 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.052 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:59.052 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.052 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.052 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:59.052 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:59.052 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.052 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:59.052 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.052 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.052 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:59.052 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:59.052 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.052 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:59.052 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.052 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.052 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:59.052 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:59.052 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.052 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:59.052 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:59.052 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.052 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:59.052 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:59.052 EAL: Hugepages will be freed exactly as allocated. 
00:05:59.052 EAL: No shared files mode enabled, IPC is disabled 00:05:59.052 EAL: No shared files mode enabled, IPC is disabled 00:05:59.052 EAL: TSC frequency is ~2700000 KHz 00:05:59.052 EAL: Main lcore 0 is ready (tid=7ff62f0e9a00;cpuset=[0]) 00:05:59.052 EAL: Trying to obtain current memory policy. 00:05:59.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.052 EAL: Restoring previous memory policy: 0 00:05:59.052 EAL: request: mp_malloc_sync 00:05:59.052 EAL: No shared files mode enabled, IPC is disabled 00:05:59.052 EAL: Heap on socket 0 was expanded by 2MB 00:05:59.052 EAL: No shared files mode enabled, IPC is disabled 00:05:59.052 EAL: No shared files mode enabled, IPC is disabled 00:05:59.052 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:59.052 EAL: Mem event callback 'spdk:(nil)' registered 00:05:59.052 00:05:59.052 00:05:59.052 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.052 http://cunit.sourceforge.net/ 00:05:59.052 00:05:59.052 00:05:59.052 Suite: components_suite 00:05:59.052 Test: vtophys_malloc_test ...passed 00:05:59.052 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:59.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.053 EAL: Restoring previous memory policy: 4 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was expanded by 4MB 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was shrunk by 4MB 00:05:59.053 EAL: Trying to obtain current memory policy. 
00:05:59.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.053 EAL: Restoring previous memory policy: 4 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was expanded by 6MB 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was shrunk by 6MB 00:05:59.053 EAL: Trying to obtain current memory policy. 00:05:59.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.053 EAL: Restoring previous memory policy: 4 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was expanded by 10MB 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was shrunk by 10MB 00:05:59.053 EAL: Trying to obtain current memory policy. 00:05:59.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.053 EAL: Restoring previous memory policy: 4 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was expanded by 18MB 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was shrunk by 18MB 00:05:59.053 EAL: Trying to obtain current memory policy. 
00:05:59.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.053 EAL: Restoring previous memory policy: 4 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was expanded by 34MB 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was shrunk by 34MB 00:05:59.053 EAL: Trying to obtain current memory policy. 00:05:59.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.053 EAL: Restoring previous memory policy: 4 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was expanded by 66MB 00:05:59.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.053 EAL: request: mp_malloc_sync 00:05:59.053 EAL: No shared files mode enabled, IPC is disabled 00:05:59.053 EAL: Heap on socket 0 was shrunk by 66MB 00:05:59.053 EAL: Trying to obtain current memory policy. 00:05:59.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.312 EAL: Restoring previous memory policy: 4 00:05:59.312 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.312 EAL: request: mp_malloc_sync 00:05:59.312 EAL: No shared files mode enabled, IPC is disabled 00:05:59.312 EAL: Heap on socket 0 was expanded by 130MB 00:05:59.312 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.312 EAL: request: mp_malloc_sync 00:05:59.312 EAL: No shared files mode enabled, IPC is disabled 00:05:59.312 EAL: Heap on socket 0 was shrunk by 130MB 00:05:59.312 EAL: Trying to obtain current memory policy. 
00:05:59.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.312 EAL: Restoring previous memory policy: 4 00:05:59.312 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.312 EAL: request: mp_malloc_sync 00:05:59.312 EAL: No shared files mode enabled, IPC is disabled 00:05:59.312 EAL: Heap on socket 0 was expanded by 258MB 00:05:59.312 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.312 EAL: request: mp_malloc_sync 00:05:59.312 EAL: No shared files mode enabled, IPC is disabled 00:05:59.312 EAL: Heap on socket 0 was shrunk by 258MB 00:05:59.312 EAL: Trying to obtain current memory policy. 00:05:59.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.570 EAL: Restoring previous memory policy: 4 00:05:59.570 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.570 EAL: request: mp_malloc_sync 00:05:59.570 EAL: No shared files mode enabled, IPC is disabled 00:05:59.570 EAL: Heap on socket 0 was expanded by 514MB 00:05:59.570 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.828 EAL: request: mp_malloc_sync 00:05:59.828 EAL: No shared files mode enabled, IPC is disabled 00:05:59.828 EAL: Heap on socket 0 was shrunk by 514MB 00:05:59.828 EAL: Trying to obtain current memory policy. 
00:05:59.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.087 EAL: Restoring previous memory policy: 4 00:06:00.087 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.087 EAL: request: mp_malloc_sync 00:06:00.087 EAL: No shared files mode enabled, IPC is disabled 00:06:00.087 EAL: Heap on socket 0 was expanded by 1026MB 00:06:00.345 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.345 EAL: request: mp_malloc_sync 00:06:00.345 EAL: No shared files mode enabled, IPC is disabled 00:06:00.345 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:00.345 passed 00:06:00.345 00:06:00.345 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.345 suites 1 1 n/a 0 0 00:06:00.345 tests 2 2 2 0 0 00:06:00.345 asserts 497 497 497 0 n/a 00:06:00.345 00:06:00.345 Elapsed time = 1.320 seconds 00:06:00.345 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.345 EAL: request: mp_malloc_sync 00:06:00.345 EAL: No shared files mode enabled, IPC is disabled 00:06:00.345 EAL: Heap on socket 0 was shrunk by 2MB 00:06:00.345 EAL: No shared files mode enabled, IPC is disabled 00:06:00.345 EAL: No shared files mode enabled, IPC is disabled 00:06:00.345 EAL: No shared files mode enabled, IPC is disabled 00:06:00.345 00:06:00.345 real 0m1.433s 00:06:00.346 user 0m0.827s 00:06:00.346 sys 0m0.570s 00:06:00.346 17:54:07 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.346 17:54:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:00.346 ************************************ 00:06:00.346 END TEST env_vtophys 00:06:00.346 ************************************ 00:06:00.605 17:54:08 env -- common/autotest_common.sh@1142 -- # return 0 00:06:00.605 17:54:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.605 17:54:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.605 17:54:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:06:00.605 17:54:08 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.605 ************************************ 00:06:00.605 START TEST env_pci 00:06:00.605 ************************************ 00:06:00.605 17:54:08 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:00.605 00:06:00.605 00:06:00.605 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.605 http://cunit.sourceforge.net/ 00:06:00.605 00:06:00.605 00:06:00.605 Suite: pci 00:06:00.605 Test: pci_hook ...[2024-07-23 17:54:08.058577] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2214417 has claimed it 00:06:00.605 EAL: Cannot find device (10000:00:01.0) 00:06:00.605 EAL: Failed to attach device on primary process 00:06:00.605 passed 00:06:00.605 00:06:00.605 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.605 suites 1 1 n/a 0 0 00:06:00.605 tests 1 1 1 0 0 00:06:00.605 asserts 25 25 25 0 n/a 00:06:00.605 00:06:00.605 Elapsed time = 0.021 seconds 00:06:00.605 00:06:00.605 real 0m0.034s 00:06:00.605 user 0m0.009s 00:06:00.605 sys 0m0.025s 00:06:00.605 17:54:08 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.605 17:54:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:00.605 ************************************ 00:06:00.605 END TEST env_pci 00:06:00.605 ************************************ 00:06:00.605 17:54:08 env -- common/autotest_common.sh@1142 -- # return 0 00:06:00.605 17:54:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:00.605 17:54:08 env -- env/env.sh@15 -- # uname 00:06:00.605 17:54:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:00.605 17:54:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:00.605 17:54:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.605 17:54:08 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:00.605 17:54:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.605 17:54:08 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.605 ************************************ 00:06:00.605 START TEST env_dpdk_post_init 00:06:00.605 ************************************ 00:06:00.605 17:54:08 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.605 EAL: Detected CPU lcores: 48 00:06:00.605 EAL: Detected NUMA nodes: 2 00:06:00.605 EAL: Detected shared linkage of DPDK 00:06:00.605 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:00.605 EAL: Selected IOVA mode 'VA' 00:06:00.605 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.605 EAL: VFIO support initialized 00:06:00.605 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:00.605 EAL: Using IOMMU type 1 (Type 1) 00:06:00.605 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:00.605 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:00.605 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 
1) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:00.865 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:01.801 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:05.142 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:05.142 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:05.142 Starting DPDK initialization... 00:06:05.142 Starting SPDK post initialization... 00:06:05.142 SPDK NVMe probe 00:06:05.142 Attaching to 0000:88:00.0 00:06:05.142 Attached to 0000:88:00.0 00:06:05.142 Cleaning up... 
00:06:05.142 00:06:05.142 real 0m4.427s 00:06:05.142 user 0m3.300s 00:06:05.142 sys 0m0.185s 00:06:05.142 17:54:12 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.142 17:54:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.142 ************************************ 00:06:05.142 END TEST env_dpdk_post_init 00:06:05.142 ************************************ 00:06:05.142 17:54:12 env -- common/autotest_common.sh@1142 -- # return 0 00:06:05.142 17:54:12 env -- env/env.sh@26 -- # uname 00:06:05.142 17:54:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:05.142 17:54:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.142 17:54:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.142 17:54:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.142 17:54:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.142 ************************************ 00:06:05.142 START TEST env_mem_callbacks 00:06:05.142 ************************************ 00:06:05.142 17:54:12 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.142 EAL: Detected CPU lcores: 48 00:06:05.142 EAL: Detected NUMA nodes: 2 00:06:05.142 EAL: Detected shared linkage of DPDK 00:06:05.142 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.142 EAL: Selected IOVA mode 'VA' 00:06:05.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.142 EAL: VFIO support initialized 00:06:05.142 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.142 00:06:05.142 00:06:05.142 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.142 http://cunit.sourceforge.net/ 00:06:05.142 00:06:05.142 00:06:05.142 Suite: memory 00:06:05.142 Test: test ... 
00:06:05.142 register 0x200000200000 2097152 00:06:05.142 malloc 3145728 00:06:05.142 register 0x200000400000 4194304 00:06:05.142 buf 0x200000500000 len 3145728 PASSED 00:06:05.142 malloc 64 00:06:05.142 buf 0x2000004fff40 len 64 PASSED 00:06:05.142 malloc 4194304 00:06:05.142 register 0x200000800000 6291456 00:06:05.142 buf 0x200000a00000 len 4194304 PASSED 00:06:05.142 free 0x200000500000 3145728 00:06:05.142 free 0x2000004fff40 64 00:06:05.142 unregister 0x200000400000 4194304 PASSED 00:06:05.142 free 0x200000a00000 4194304 00:06:05.142 unregister 0x200000800000 6291456 PASSED 00:06:05.142 malloc 8388608 00:06:05.142 register 0x200000400000 10485760 00:06:05.142 buf 0x200000600000 len 8388608 PASSED 00:06:05.142 free 0x200000600000 8388608 00:06:05.142 unregister 0x200000400000 10485760 PASSED 00:06:05.142 passed 00:06:05.142 00:06:05.142 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.142 suites 1 1 n/a 0 0 00:06:05.142 tests 1 1 1 0 0 00:06:05.142 asserts 15 15 15 0 n/a 00:06:05.142 00:06:05.142 Elapsed time = 0.005 seconds 00:06:05.142 00:06:05.142 real 0m0.047s 00:06:05.142 user 0m0.011s 00:06:05.142 sys 0m0.036s 00:06:05.142 17:54:12 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.142 17:54:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:05.142 ************************************ 00:06:05.142 END TEST env_mem_callbacks 00:06:05.143 ************************************ 00:06:05.143 17:54:12 env -- common/autotest_common.sh@1142 -- # return 0 00:06:05.143 00:06:05.143 real 0m6.390s 00:06:05.143 user 0m4.410s 00:06:05.143 sys 0m1.021s 00:06:05.143 17:54:12 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.143 17:54:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.143 ************************************ 00:06:05.143 END TEST env 00:06:05.143 ************************************ 00:06:05.143 17:54:12 -- common/autotest_common.sh@1142 -- # return 0 
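The `START TEST` / `END TEST` banners framing each test above come from SPDK's `run_test` wrapper in `test/common/autotest_common.sh`. A minimal, hypothetical re-creation of that framing pattern — the real helper also handles xtrace control, argument checks, and timing, which this sketch deliberately omits:

```shell
# Hypothetical minimal version of the run_test banner pattern seen in this
# log: print START/END banners around a command and propagate its exit status.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  local rc=$?   # capture the command's status before returning it
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_env true && echo "demo_env passed"
```

Because the wrapper returns the wrapped command's status, a failing test stops the surrounding script under `set -e` while still emitting its closing banner.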
00:06:05.143 17:54:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.143 17:54:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.143 17:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.143 17:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.143 ************************************ 00:06:05.143 START TEST rpc 00:06:05.143 ************************************ 00:06:05.143 17:54:12 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.143 * Looking for test storage... 00:06:05.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.403 17:54:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2215257 00:06:05.403 17:54:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:05.403 17:54:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.403 17:54:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2215257 00:06:05.403 17:54:12 rpc -- common/autotest_common.sh@829 -- # '[' -z 2215257 ']' 00:06:05.403 17:54:12 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.403 17:54:12 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.403 17:54:12 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.403 17:54:12 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.403 17:54:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.403 [2024-07-23 17:54:12.828887] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:05.403 [2024-07-23 17:54:12.828983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215257 ] 00:06:05.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.403 [2024-07-23 17:54:12.890189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.403 [2024-07-23 17:54:12.979351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.403 [2024-07-23 17:54:12.979427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2215257' to capture a snapshot of events at runtime. 00:06:05.403 [2024-07-23 17:54:12.979442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.404 [2024-07-23 17:54:12.979455] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.404 [2024-07-23 17:54:12.979465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2215257 for offline analysis/debug. 
00:06:05.404 [2024-07-23 17:54:12.979500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.662 17:54:13 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.662 17:54:13 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.662 17:54:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.662 17:54:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.662 17:54:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:05.662 17:54:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:05.662 17:54:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.662 17:54:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.662 17:54:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.662 ************************************ 00:06:05.662 START TEST rpc_integrity 00:06:05.662 ************************************ 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:05.662 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.662 17:54:13 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.662 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.662 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.662 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.662 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:05.662 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.662 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.662 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.662 { 00:06:05.662 "name": "Malloc0", 00:06:05.662 "aliases": [ 00:06:05.662 "4464bbee-519b-4eaa-8a15-3b8327c9ac34" 00:06:05.662 ], 00:06:05.662 "product_name": "Malloc disk", 00:06:05.662 "block_size": 512, 00:06:05.662 "num_blocks": 16384, 00:06:05.662 "uuid": "4464bbee-519b-4eaa-8a15-3b8327c9ac34", 00:06:05.662 "assigned_rate_limits": { 00:06:05.662 "rw_ios_per_sec": 0, 00:06:05.662 "rw_mbytes_per_sec": 0, 00:06:05.662 "r_mbytes_per_sec": 0, 00:06:05.662 "w_mbytes_per_sec": 0 00:06:05.662 }, 00:06:05.662 "claimed": false, 00:06:05.662 "zoned": false, 00:06:05.662 "supported_io_types": { 00:06:05.662 "read": true, 00:06:05.662 "write": true, 00:06:05.662 "unmap": true, 00:06:05.662 "flush": true, 00:06:05.662 "reset": true, 00:06:05.662 "nvme_admin": false, 00:06:05.662 "nvme_io": false, 00:06:05.662 "nvme_io_md": false, 00:06:05.662 "write_zeroes": true, 00:06:05.662 "zcopy": true, 00:06:05.662 "get_zone_info": false, 00:06:05.662 
"zone_management": false, 00:06:05.662 "zone_append": false, 00:06:05.662 "compare": false, 00:06:05.662 "compare_and_write": false, 00:06:05.662 "abort": true, 00:06:05.662 "seek_hole": false, 00:06:05.662 "seek_data": false, 00:06:05.662 "copy": true, 00:06:05.662 "nvme_iov_md": false 00:06:05.662 }, 00:06:05.662 "memory_domains": [ 00:06:05.662 { 00:06:05.662 "dma_device_id": "system", 00:06:05.662 "dma_device_type": 1 00:06:05.662 }, 00:06:05.662 { 00:06:05.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.662 "dma_device_type": 2 00:06:05.662 } 00:06:05.662 ], 00:06:05.662 "driver_specific": {} 00:06:05.662 } 00:06:05.662 ]' 00:06:05.662 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.921 [2024-07-23 17:54:13.338290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:05.921 [2024-07-23 17:54:13.338366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.921 [2024-07-23 17:54:13.338391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cf2bb0 00:06:05.921 [2024-07-23 17:54:13.338406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.921 [2024-07-23 17:54:13.339722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.921 [2024-07-23 17:54:13.339744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.921 Passthru0 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.921 { 00:06:05.921 "name": "Malloc0", 00:06:05.921 "aliases": [ 00:06:05.921 "4464bbee-519b-4eaa-8a15-3b8327c9ac34" 00:06:05.921 ], 00:06:05.921 "product_name": "Malloc disk", 00:06:05.921 "block_size": 512, 00:06:05.921 "num_blocks": 16384, 00:06:05.921 "uuid": "4464bbee-519b-4eaa-8a15-3b8327c9ac34", 00:06:05.921 "assigned_rate_limits": { 00:06:05.921 "rw_ios_per_sec": 0, 00:06:05.921 "rw_mbytes_per_sec": 0, 00:06:05.921 "r_mbytes_per_sec": 0, 00:06:05.921 "w_mbytes_per_sec": 0 00:06:05.921 }, 00:06:05.921 "claimed": true, 00:06:05.921 "claim_type": "exclusive_write", 00:06:05.921 "zoned": false, 00:06:05.921 "supported_io_types": { 00:06:05.921 "read": true, 00:06:05.921 "write": true, 00:06:05.921 "unmap": true, 00:06:05.921 "flush": true, 00:06:05.921 "reset": true, 00:06:05.921 "nvme_admin": false, 00:06:05.921 "nvme_io": false, 00:06:05.921 "nvme_io_md": false, 00:06:05.921 "write_zeroes": true, 00:06:05.921 "zcopy": true, 00:06:05.921 "get_zone_info": false, 00:06:05.921 "zone_management": false, 00:06:05.921 "zone_append": false, 00:06:05.921 "compare": false, 00:06:05.921 "compare_and_write": false, 00:06:05.921 "abort": true, 00:06:05.921 "seek_hole": false, 00:06:05.921 "seek_data": false, 00:06:05.921 "copy": true, 00:06:05.921 "nvme_iov_md": false 00:06:05.921 }, 00:06:05.921 "memory_domains": [ 00:06:05.921 { 00:06:05.921 "dma_device_id": "system", 00:06:05.921 "dma_device_type": 1 00:06:05.921 }, 00:06:05.921 { 00:06:05.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.921 "dma_device_type": 2 00:06:05.921 } 00:06:05.921 ], 00:06:05.921 "driver_specific": {} 00:06:05.921 }, 00:06:05.921 { 
00:06:05.921 "name": "Passthru0", 00:06:05.921 "aliases": [ 00:06:05.921 "70e97ad7-5a4f-5df8-ba2a-2eee5213a366" 00:06:05.921 ], 00:06:05.921 "product_name": "passthru", 00:06:05.921 "block_size": 512, 00:06:05.921 "num_blocks": 16384, 00:06:05.921 "uuid": "70e97ad7-5a4f-5df8-ba2a-2eee5213a366", 00:06:05.921 "assigned_rate_limits": { 00:06:05.921 "rw_ios_per_sec": 0, 00:06:05.921 "rw_mbytes_per_sec": 0, 00:06:05.921 "r_mbytes_per_sec": 0, 00:06:05.921 "w_mbytes_per_sec": 0 00:06:05.921 }, 00:06:05.921 "claimed": false, 00:06:05.921 "zoned": false, 00:06:05.921 "supported_io_types": { 00:06:05.921 "read": true, 00:06:05.921 "write": true, 00:06:05.921 "unmap": true, 00:06:05.921 "flush": true, 00:06:05.921 "reset": true, 00:06:05.921 "nvme_admin": false, 00:06:05.921 "nvme_io": false, 00:06:05.921 "nvme_io_md": false, 00:06:05.921 "write_zeroes": true, 00:06:05.921 "zcopy": true, 00:06:05.921 "get_zone_info": false, 00:06:05.921 "zone_management": false, 00:06:05.921 "zone_append": false, 00:06:05.921 "compare": false, 00:06:05.921 "compare_and_write": false, 00:06:05.921 "abort": true, 00:06:05.921 "seek_hole": false, 00:06:05.921 "seek_data": false, 00:06:05.921 "copy": true, 00:06:05.921 "nvme_iov_md": false 00:06:05.921 }, 00:06:05.921 "memory_domains": [ 00:06:05.921 { 00:06:05.921 "dma_device_id": "system", 00:06:05.921 "dma_device_type": 1 00:06:05.921 }, 00:06:05.921 { 00:06:05.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.921 "dma_device_type": 2 00:06:05.921 } 00:06:05.921 ], 00:06:05.921 "driver_specific": { 00:06:05.921 "passthru": { 00:06:05.921 "name": "Passthru0", 00:06:05.921 "base_bdev_name": "Malloc0" 00:06:05.921 } 00:06:05.921 } 00:06:05.921 } 00:06:05.921 ]' 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.921 17:54:13 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.921 17:54:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.921 00:06:05.921 real 0m0.215s 00:06:05.921 user 0m0.142s 00:06:05.921 sys 0m0.019s 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.921 17:54:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.921 ************************************ 00:06:05.921 END TEST rpc_integrity 00:06:05.921 ************************************ 00:06:05.921 17:54:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:05.921 17:54:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:05.922 17:54:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.922 17:54:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.922 17:54:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.922 
************************************ 00:06:05.922 START TEST rpc_plugins 00:06:05.922 ************************************ 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:05.922 { 00:06:05.922 "name": "Malloc1", 00:06:05.922 "aliases": [ 00:06:05.922 "2aaa83da-df23-46aa-b392-8b04dcef33ee" 00:06:05.922 ], 00:06:05.922 "product_name": "Malloc disk", 00:06:05.922 "block_size": 4096, 00:06:05.922 "num_blocks": 256, 00:06:05.922 "uuid": "2aaa83da-df23-46aa-b392-8b04dcef33ee", 00:06:05.922 "assigned_rate_limits": { 00:06:05.922 "rw_ios_per_sec": 0, 00:06:05.922 "rw_mbytes_per_sec": 0, 00:06:05.922 "r_mbytes_per_sec": 0, 00:06:05.922 "w_mbytes_per_sec": 0 00:06:05.922 }, 00:06:05.922 "claimed": false, 00:06:05.922 "zoned": false, 00:06:05.922 "supported_io_types": { 00:06:05.922 "read": true, 00:06:05.922 "write": true, 00:06:05.922 "unmap": true, 00:06:05.922 "flush": true, 00:06:05.922 "reset": true, 00:06:05.922 "nvme_admin": false, 00:06:05.922 "nvme_io": false, 00:06:05.922 "nvme_io_md": false, 00:06:05.922 "write_zeroes": true, 00:06:05.922 "zcopy": true, 00:06:05.922 
"get_zone_info": false, 00:06:05.922 "zone_management": false, 00:06:05.922 "zone_append": false, 00:06:05.922 "compare": false, 00:06:05.922 "compare_and_write": false, 00:06:05.922 "abort": true, 00:06:05.922 "seek_hole": false, 00:06:05.922 "seek_data": false, 00:06:05.922 "copy": true, 00:06:05.922 "nvme_iov_md": false 00:06:05.922 }, 00:06:05.922 "memory_domains": [ 00:06:05.922 { 00:06:05.922 "dma_device_id": "system", 00:06:05.922 "dma_device_type": 1 00:06:05.922 }, 00:06:05.922 { 00:06:05.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.922 "dma_device_type": 2 00:06:05.922 } 00:06:05.922 ], 00:06:05.922 "driver_specific": {} 00:06:05.922 } 00:06:05.922 ]' 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.922 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:05.922 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:06.180 17:54:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:06.180 00:06:06.180 real 0m0.106s 00:06:06.180 user 0m0.072s 00:06:06.180 sys 0m0.005s 00:06:06.180 17:54:13 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.180 17:54:13 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.180 ************************************ 00:06:06.180 END TEST rpc_plugins 00:06:06.180 ************************************ 00:06:06.180 17:54:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.180 17:54:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:06.180 17:54:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.180 17:54:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.180 17:54:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.180 ************************************ 00:06:06.180 START TEST rpc_trace_cmd_test 00:06:06.180 ************************************ 00:06:06.180 17:54:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:06.180 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:06.180 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:06.180 17:54:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.180 17:54:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.180 17:54:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.180 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:06.180 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2215257", 00:06:06.180 "tpoint_group_mask": "0x8", 00:06:06.180 "iscsi_conn": { 00:06:06.180 "mask": "0x2", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "scsi": { 00:06:06.180 "mask": "0x4", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "bdev": { 00:06:06.180 "mask": "0x8", 00:06:06.180 "tpoint_mask": "0xffffffffffffffff" 00:06:06.180 }, 00:06:06.180 "nvmf_rdma": { 00:06:06.180 "mask": "0x10", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "nvmf_tcp": { 00:06:06.180 "mask": "0x20", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 
00:06:06.180 "ftl": { 00:06:06.180 "mask": "0x40", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "blobfs": { 00:06:06.180 "mask": "0x80", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "dsa": { 00:06:06.180 "mask": "0x200", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "thread": { 00:06:06.180 "mask": "0x400", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "nvme_pcie": { 00:06:06.180 "mask": "0x800", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "iaa": { 00:06:06.180 "mask": "0x1000", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "nvme_tcp": { 00:06:06.180 "mask": "0x2000", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "bdev_nvme": { 00:06:06.180 "mask": "0x4000", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 }, 00:06:06.180 "sock": { 00:06:06.180 "mask": "0x8000", 00:06:06.180 "tpoint_mask": "0x0" 00:06:06.180 } 00:06:06.181 }' 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:06.181 00:06:06.181 real 0m0.184s 00:06:06.181 user 0m0.160s 00:06:06.181 sys 0m0.016s 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.181 17:54:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.181 ************************************ 00:06:06.181 END TEST rpc_trace_cmd_test 00:06:06.181 ************************************ 00:06:06.439 17:54:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.439 17:54:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:06.439 17:54:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:06.439 17:54:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:06.439 17:54:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.439 17:54:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.439 17:54:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.439 ************************************ 00:06:06.439 START TEST rpc_daemon_integrity 00:06:06.439 ************************************ 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.439 17:54:13 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.439 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.439 { 00:06:06.439 "name": "Malloc2", 00:06:06.439 "aliases": [ 00:06:06.439 "4049cb36-a007-44c8-8c24-81d84015a15f" 00:06:06.439 ], 00:06:06.439 "product_name": "Malloc disk", 00:06:06.439 "block_size": 512, 00:06:06.440 "num_blocks": 16384, 00:06:06.440 "uuid": "4049cb36-a007-44c8-8c24-81d84015a15f", 00:06:06.440 "assigned_rate_limits": { 00:06:06.440 "rw_ios_per_sec": 0, 00:06:06.440 "rw_mbytes_per_sec": 0, 00:06:06.440 "r_mbytes_per_sec": 0, 00:06:06.440 "w_mbytes_per_sec": 0 00:06:06.440 }, 00:06:06.440 "claimed": false, 00:06:06.440 "zoned": false, 00:06:06.440 "supported_io_types": { 00:06:06.440 "read": true, 00:06:06.440 "write": true, 00:06:06.440 "unmap": true, 00:06:06.440 "flush": true, 00:06:06.440 "reset": true, 00:06:06.440 "nvme_admin": false, 00:06:06.440 "nvme_io": false, 00:06:06.440 "nvme_io_md": false, 00:06:06.440 "write_zeroes": true, 00:06:06.440 "zcopy": true, 00:06:06.440 "get_zone_info": false, 00:06:06.440 "zone_management": false, 00:06:06.440 "zone_append": false, 00:06:06.440 "compare": false, 00:06:06.440 "compare_and_write": false, 00:06:06.440 "abort": true, 00:06:06.440 "seek_hole": false, 00:06:06.440 "seek_data": false, 00:06:06.440 "copy": true, 00:06:06.440 "nvme_iov_md": false 00:06:06.440 }, 00:06:06.440 "memory_domains": [ 00:06:06.440 { 00:06:06.440 "dma_device_id": "system", 00:06:06.440 "dma_device_type": 
1 00:06:06.440 }, 00:06:06.440 { 00:06:06.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.440 "dma_device_type": 2 00:06:06.440 } 00:06:06.440 ], 00:06:06.440 "driver_specific": {} 00:06:06.440 } 00:06:06.440 ]' 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.440 [2024-07-23 17:54:13.976120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:06.440 [2024-07-23 17:54:13.976165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.440 [2024-07-23 17:54:13.976190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cf36f0 00:06:06.440 [2024-07-23 17:54:13.976204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.440 [2024-07-23 17:54:13.977421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.440 [2024-07-23 17:54:13.977447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.440 Passthru0 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:06:06.440 { 00:06:06.440 "name": "Malloc2", 00:06:06.440 "aliases": [ 00:06:06.440 "4049cb36-a007-44c8-8c24-81d84015a15f" 00:06:06.440 ], 00:06:06.440 "product_name": "Malloc disk", 00:06:06.440 "block_size": 512, 00:06:06.440 "num_blocks": 16384, 00:06:06.440 "uuid": "4049cb36-a007-44c8-8c24-81d84015a15f", 00:06:06.440 "assigned_rate_limits": { 00:06:06.440 "rw_ios_per_sec": 0, 00:06:06.440 "rw_mbytes_per_sec": 0, 00:06:06.440 "r_mbytes_per_sec": 0, 00:06:06.440 "w_mbytes_per_sec": 0 00:06:06.440 }, 00:06:06.440 "claimed": true, 00:06:06.440 "claim_type": "exclusive_write", 00:06:06.440 "zoned": false, 00:06:06.440 "supported_io_types": { 00:06:06.440 "read": true, 00:06:06.440 "write": true, 00:06:06.440 "unmap": true, 00:06:06.440 "flush": true, 00:06:06.440 "reset": true, 00:06:06.440 "nvme_admin": false, 00:06:06.440 "nvme_io": false, 00:06:06.440 "nvme_io_md": false, 00:06:06.440 "write_zeroes": true, 00:06:06.440 "zcopy": true, 00:06:06.440 "get_zone_info": false, 00:06:06.440 "zone_management": false, 00:06:06.440 "zone_append": false, 00:06:06.440 "compare": false, 00:06:06.440 "compare_and_write": false, 00:06:06.440 "abort": true, 00:06:06.440 "seek_hole": false, 00:06:06.440 "seek_data": false, 00:06:06.440 "copy": true, 00:06:06.440 "nvme_iov_md": false 00:06:06.440 }, 00:06:06.440 "memory_domains": [ 00:06:06.440 { 00:06:06.440 "dma_device_id": "system", 00:06:06.440 "dma_device_type": 1 00:06:06.440 }, 00:06:06.440 { 00:06:06.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.440 "dma_device_type": 2 00:06:06.440 } 00:06:06.440 ], 00:06:06.440 "driver_specific": {} 00:06:06.440 }, 00:06:06.440 { 00:06:06.440 "name": "Passthru0", 00:06:06.440 "aliases": [ 00:06:06.440 "bb23b2a0-9a93-5042-9791-16f231c81b50" 00:06:06.440 ], 00:06:06.440 "product_name": "passthru", 00:06:06.440 "block_size": 512, 00:06:06.440 "num_blocks": 16384, 00:06:06.440 "uuid": "bb23b2a0-9a93-5042-9791-16f231c81b50", 00:06:06.440 "assigned_rate_limits": { 00:06:06.440 
"rw_ios_per_sec": 0, 00:06:06.440 "rw_mbytes_per_sec": 0, 00:06:06.440 "r_mbytes_per_sec": 0, 00:06:06.440 "w_mbytes_per_sec": 0 00:06:06.440 }, 00:06:06.440 "claimed": false, 00:06:06.440 "zoned": false, 00:06:06.440 "supported_io_types": { 00:06:06.440 "read": true, 00:06:06.440 "write": true, 00:06:06.440 "unmap": true, 00:06:06.440 "flush": true, 00:06:06.440 "reset": true, 00:06:06.440 "nvme_admin": false, 00:06:06.440 "nvme_io": false, 00:06:06.440 "nvme_io_md": false, 00:06:06.440 "write_zeroes": true, 00:06:06.440 "zcopy": true, 00:06:06.440 "get_zone_info": false, 00:06:06.440 "zone_management": false, 00:06:06.440 "zone_append": false, 00:06:06.440 "compare": false, 00:06:06.440 "compare_and_write": false, 00:06:06.440 "abort": true, 00:06:06.440 "seek_hole": false, 00:06:06.440 "seek_data": false, 00:06:06.440 "copy": true, 00:06:06.440 "nvme_iov_md": false 00:06:06.440 }, 00:06:06.440 "memory_domains": [ 00:06:06.440 { 00:06:06.440 "dma_device_id": "system", 00:06:06.440 "dma_device_type": 1 00:06:06.440 }, 00:06:06.440 { 00:06:06.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.440 "dma_device_type": 2 00:06:06.440 } 00:06:06.440 ], 00:06:06.440 "driver_specific": { 00:06:06.440 "passthru": { 00:06:06.440 "name": "Passthru0", 00:06:06.440 "base_bdev_name": "Malloc2" 00:06:06.440 } 00:06:06.440 } 00:06:06.440 } 00:06:06.440 ]' 00:06:06.440 17:54:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.440 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.441 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.441 17:54:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.441 17:54:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.441 17:54:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.441 00:06:06.441 real 0m0.211s 00:06:06.441 user 0m0.139s 00:06:06.441 sys 0m0.020s 00:06:06.441 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.441 17:54:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.441 ************************************ 00:06:06.441 END TEST rpc_daemon_integrity 00:06:06.441 ************************************ 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.699 17:54:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.699 17:54:14 rpc -- rpc/rpc.sh@84 -- # killprocess 2215257 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@948 -- # '[' -z 2215257 ']' 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@952 -- # kill -0 2215257 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@953 -- # uname 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2215257 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2215257' 00:06:06.699 killing process with pid 2215257 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@967 -- # kill 2215257 00:06:06.699 17:54:14 rpc -- common/autotest_common.sh@972 -- # wait 2215257 00:06:06.957 00:06:06.957 real 0m1.796s 00:06:06.957 user 0m2.259s 00:06:06.957 sys 0m0.557s 00:06:06.957 17:54:14 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.957 17:54:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.957 ************************************ 00:06:06.957 END TEST rpc 00:06:06.957 ************************************ 00:06:06.957 17:54:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.957 17:54:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.957 17:54:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.957 17:54:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.957 17:54:14 -- common/autotest_common.sh@10 -- # set +x 00:06:06.957 ************************************ 00:06:06.957 START TEST skip_rpc 00:06:06.957 ************************************ 00:06:06.957 17:54:14 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:07.215 * Looking for test storage... 
00:06:07.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:07.215 17:54:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:07.215 17:54:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:07.215 17:54:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:07.215 17:54:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.215 17:54:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.215 17:54:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.216 ************************************ 00:06:07.216 START TEST skip_rpc 00:06:07.216 ************************************ 00:06:07.216 17:54:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:07.216 17:54:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2215696 00:06:07.216 17:54:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:07.216 17:54:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.216 17:54:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:07.216 [2024-07-23 17:54:14.704610] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:07.216 [2024-07-23 17:54:14.704703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215696 ] 00:06:07.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.216 [2024-07-23 17:54:14.762467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.216 [2024-07-23 17:54:14.850018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2215696 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2215696 ']' 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2215696 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2215696 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2215696' 00:06:12.480 killing process with pid 2215696 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2215696 00:06:12.480 17:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2215696 00:06:12.480 00:06:12.480 real 0m5.434s 00:06:12.480 user 0m5.127s 00:06:12.480 sys 0m0.309s 00:06:12.480 17:54:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.480 17:54:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.480 ************************************ 00:06:12.480 END TEST skip_rpc 00:06:12.480 ************************************ 00:06:12.480 17:54:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:12.480 17:54:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:12.480 17:54:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.480 17:54:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.480 
17:54:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.480 ************************************ 00:06:12.480 START TEST skip_rpc_with_json 00:06:12.480 ************************************ 00:06:12.480 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:12.480 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:12.480 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2216384 00:06:12.480 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.480 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.481 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2216384 00:06:12.481 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2216384 ']' 00:06:12.481 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.481 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.481 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.481 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.481 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.739 [2024-07-23 17:54:20.188139] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:12.739 [2024-07-23 17:54:20.188222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216384 ] 00:06:12.739 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.739 [2024-07-23 17:54:20.247387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.739 [2024-07-23 17:54:20.334461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.997 [2024-07-23 17:54:20.570729] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:12.997 request: 00:06:12.997 { 00:06:12.997 "trtype": "tcp", 00:06:12.997 "method": "nvmf_get_transports", 00:06:12.997 "req_id": 1 00:06:12.997 } 00:06:12.997 Got JSON-RPC error response 00:06:12.997 response: 00:06:12.997 { 00:06:12.997 "code": -19, 00:06:12.997 "message": "No such device" 00:06:12.997 } 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.997 [2024-07-23 17:54:20.578830] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.997 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.256 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:13.256 { 00:06:13.256 "subsystems": [ 00:06:13.256 { 00:06:13.256 "subsystem": "vfio_user_target", 00:06:13.256 "config": null 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "keyring", 00:06:13.256 "config": [] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "iobuf", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "iobuf_set_options", 00:06:13.256 "params": { 00:06:13.256 "small_pool_count": 8192, 00:06:13.256 "large_pool_count": 1024, 00:06:13.256 "small_bufsize": 8192, 00:06:13.256 "large_bufsize": 135168 00:06:13.256 } 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "sock", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "sock_set_default_impl", 00:06:13.256 "params": { 00:06:13.256 "impl_name": "posix" 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "sock_impl_set_options", 00:06:13.256 "params": { 00:06:13.256 "impl_name": "ssl", 00:06:13.256 "recv_buf_size": 4096, 00:06:13.256 "send_buf_size": 4096, 00:06:13.256 "enable_recv_pipe": true, 00:06:13.256 "enable_quickack": false, 00:06:13.256 "enable_placement_id": 0, 00:06:13.256 "enable_zerocopy_send_server": true, 00:06:13.256 "enable_zerocopy_send_client": false, 00:06:13.256 "zerocopy_threshold": 0, 
00:06:13.256 "tls_version": 0, 00:06:13.256 "enable_ktls": false 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "sock_impl_set_options", 00:06:13.256 "params": { 00:06:13.256 "impl_name": "posix", 00:06:13.256 "recv_buf_size": 2097152, 00:06:13.256 "send_buf_size": 2097152, 00:06:13.256 "enable_recv_pipe": true, 00:06:13.256 "enable_quickack": false, 00:06:13.256 "enable_placement_id": 0, 00:06:13.256 "enable_zerocopy_send_server": true, 00:06:13.256 "enable_zerocopy_send_client": false, 00:06:13.256 "zerocopy_threshold": 0, 00:06:13.256 "tls_version": 0, 00:06:13.256 "enable_ktls": false 00:06:13.256 } 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "vmd", 00:06:13.256 "config": [] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "accel", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "accel_set_options", 00:06:13.256 "params": { 00:06:13.256 "small_cache_size": 128, 00:06:13.256 "large_cache_size": 16, 00:06:13.256 "task_count": 2048, 00:06:13.256 "sequence_count": 2048, 00:06:13.256 "buf_count": 2048 00:06:13.256 } 00:06:13.256 } 00:06:13.256 ] 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "subsystem": "bdev", 00:06:13.256 "config": [ 00:06:13.256 { 00:06:13.256 "method": "bdev_set_options", 00:06:13.256 "params": { 00:06:13.256 "bdev_io_pool_size": 65535, 00:06:13.256 "bdev_io_cache_size": 256, 00:06:13.256 "bdev_auto_examine": true, 00:06:13.256 "iobuf_small_cache_size": 128, 00:06:13.256 "iobuf_large_cache_size": 16 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "bdev_raid_set_options", 00:06:13.256 "params": { 00:06:13.256 "process_window_size_kb": 1024, 00:06:13.256 "process_max_bandwidth_mb_sec": 0 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "bdev_iscsi_set_options", 00:06:13.256 "params": { 00:06:13.256 "timeout_sec": 30 00:06:13.256 } 00:06:13.256 }, 00:06:13.256 { 00:06:13.256 "method": "bdev_nvme_set_options", 00:06:13.256 
"params": { 00:06:13.256 "action_on_timeout": "none", 00:06:13.256 "timeout_us": 0, 00:06:13.256 "timeout_admin_us": 0, 00:06:13.256 "keep_alive_timeout_ms": 10000, 00:06:13.256 "arbitration_burst": 0, 00:06:13.256 "low_priority_weight": 0, 00:06:13.256 "medium_priority_weight": 0, 00:06:13.256 "high_priority_weight": 0, 00:06:13.256 "nvme_adminq_poll_period_us": 10000, 00:06:13.256 "nvme_ioq_poll_period_us": 0, 00:06:13.256 "io_queue_requests": 0, 00:06:13.256 "delay_cmd_submit": true, 00:06:13.256 "transport_retry_count": 4, 00:06:13.256 "bdev_retry_count": 3, 00:06:13.256 "transport_ack_timeout": 0, 00:06:13.256 "ctrlr_loss_timeout_sec": 0, 00:06:13.256 "reconnect_delay_sec": 0, 00:06:13.256 "fast_io_fail_timeout_sec": 0, 00:06:13.256 "disable_auto_failback": false, 00:06:13.256 "generate_uuids": false, 00:06:13.256 "transport_tos": 0, 00:06:13.256 "nvme_error_stat": false, 00:06:13.256 "rdma_srq_size": 0, 00:06:13.256 "io_path_stat": false, 00:06:13.256 "allow_accel_sequence": false, 00:06:13.257 "rdma_max_cq_size": 0, 00:06:13.257 "rdma_cm_event_timeout_ms": 0, 00:06:13.257 "dhchap_digests": [ 00:06:13.257 "sha256", 00:06:13.257 "sha384", 00:06:13.257 "sha512" 00:06:13.257 ], 00:06:13.257 "dhchap_dhgroups": [ 00:06:13.257 "null", 00:06:13.257 "ffdhe2048", 00:06:13.257 "ffdhe3072", 00:06:13.257 "ffdhe4096", 00:06:13.257 "ffdhe6144", 00:06:13.257 "ffdhe8192" 00:06:13.257 ] 00:06:13.257 } 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "method": "bdev_nvme_set_hotplug", 00:06:13.257 "params": { 00:06:13.257 "period_us": 100000, 00:06:13.257 "enable": false 00:06:13.257 } 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "method": "bdev_wait_for_examine" 00:06:13.257 } 00:06:13.257 ] 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "scsi", 00:06:13.257 "config": null 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "scheduler", 00:06:13.257 "config": [ 00:06:13.257 { 00:06:13.257 "method": "framework_set_scheduler", 00:06:13.257 "params": { 00:06:13.257 
"name": "static" 00:06:13.257 } 00:06:13.257 } 00:06:13.257 ] 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "vhost_scsi", 00:06:13.257 "config": [] 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "vhost_blk", 00:06:13.257 "config": [] 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "ublk", 00:06:13.257 "config": [] 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "nbd", 00:06:13.257 "config": [] 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "nvmf", 00:06:13.257 "config": [ 00:06:13.257 { 00:06:13.257 "method": "nvmf_set_config", 00:06:13.257 "params": { 00:06:13.257 "discovery_filter": "match_any", 00:06:13.257 "admin_cmd_passthru": { 00:06:13.257 "identify_ctrlr": false 00:06:13.257 } 00:06:13.257 } 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "method": "nvmf_set_max_subsystems", 00:06:13.257 "params": { 00:06:13.257 "max_subsystems": 1024 00:06:13.257 } 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "method": "nvmf_set_crdt", 00:06:13.257 "params": { 00:06:13.257 "crdt1": 0, 00:06:13.257 "crdt2": 0, 00:06:13.257 "crdt3": 0 00:06:13.257 } 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "method": "nvmf_create_transport", 00:06:13.257 "params": { 00:06:13.257 "trtype": "TCP", 00:06:13.257 "max_queue_depth": 128, 00:06:13.257 "max_io_qpairs_per_ctrlr": 127, 00:06:13.257 "in_capsule_data_size": 4096, 00:06:13.257 "max_io_size": 131072, 00:06:13.257 "io_unit_size": 131072, 00:06:13.257 "max_aq_depth": 128, 00:06:13.257 "num_shared_buffers": 511, 00:06:13.257 "buf_cache_size": 4294967295, 00:06:13.257 "dif_insert_or_strip": false, 00:06:13.257 "zcopy": false, 00:06:13.257 "c2h_success": true, 00:06:13.257 "sock_priority": 0, 00:06:13.257 "abort_timeout_sec": 1, 00:06:13.257 "ack_timeout": 0, 00:06:13.257 "data_wr_pool_size": 0 00:06:13.257 } 00:06:13.257 } 00:06:13.257 ] 00:06:13.257 }, 00:06:13.257 { 00:06:13.257 "subsystem": "iscsi", 00:06:13.257 "config": [ 00:06:13.257 { 00:06:13.257 "method": "iscsi_set_options", 00:06:13.257 
"params": { 00:06:13.257 "node_base": "iqn.2016-06.io.spdk", 00:06:13.257 "max_sessions": 128, 00:06:13.257 "max_connections_per_session": 2, 00:06:13.257 "max_queue_depth": 64, 00:06:13.257 "default_time2wait": 2, 00:06:13.257 "default_time2retain": 20, 00:06:13.257 "first_burst_length": 8192, 00:06:13.257 "immediate_data": true, 00:06:13.257 "allow_duplicated_isid": false, 00:06:13.257 "error_recovery_level": 0, 00:06:13.257 "nop_timeout": 60, 00:06:13.257 "nop_in_interval": 30, 00:06:13.257 "disable_chap": false, 00:06:13.257 "require_chap": false, 00:06:13.257 "mutual_chap": false, 00:06:13.257 "chap_group": 0, 00:06:13.257 "max_large_datain_per_connection": 64, 00:06:13.257 "max_r2t_per_connection": 4, 00:06:13.257 "pdu_pool_size": 36864, 00:06:13.257 "immediate_data_pool_size": 16384, 00:06:13.257 "data_out_pool_size": 2048 00:06:13.257 } 00:06:13.257 } 00:06:13.257 ] 00:06:13.257 } 00:06:13.257 ] 00:06:13.257 } 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2216384 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2216384 ']' 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2216384 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216384 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 2216384' 00:06:13.257 killing process with pid 2216384 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2216384 00:06:13.257 17:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2216384 00:06:13.514 17:54:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2216520 00:06:13.514 17:54:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:13.514 17:54:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2216520 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2216520 ']' 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2216520 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216520 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216520' 00:06:18.774 killing process with pid 2216520 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2216520 00:06:18.774 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2216520 00:06:19.031 17:54:26 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:19.031 17:54:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:19.031 00:06:19.031 real 0m6.423s 00:06:19.031 user 0m6.074s 00:06:19.031 sys 0m0.636s 00:06:19.031 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.031 17:54:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.031 ************************************ 00:06:19.031 END TEST skip_rpc_with_json 00:06:19.031 ************************************ 00:06:19.031 17:54:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:19.032 17:54:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:19.032 17:54:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.032 17:54:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.032 17:54:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.032 ************************************ 00:06:19.032 START TEST skip_rpc_with_delay 00:06:19.032 ************************************ 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.032 [2024-07-23 17:54:26.664853] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:19.032 [2024-07-23 17:54:26.664953] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.032 00:06:19.032 real 0m0.069s 00:06:19.032 user 0m0.049s 00:06:19.032 sys 0m0.020s 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.032 17:54:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:19.032 ************************************ 00:06:19.032 END TEST skip_rpc_with_delay 00:06:19.032 ************************************ 00:06:19.290 17:54:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:19.290 17:54:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:19.290 17:54:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:19.290 17:54:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:19.290 17:54:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.290 17:54:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.290 17:54:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.290 ************************************ 00:06:19.290 START TEST exit_on_failed_rpc_init 00:06:19.290 ************************************ 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2217237 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2217237 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2217237 ']' 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.290 17:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.290 [2024-07-23 17:54:26.784419] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:19.290 [2024-07-23 17:54:26.784508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217237 ] 00:06:19.290 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.290 [2024-07-23 17:54:26.842665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.290 [2024-07-23 17:54:26.931656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.548 17:54:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.548 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.806 [2024-07-23 17:54:27.237801] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:06:19.806 [2024-07-23 17:54:27.237883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217254 ] 00:06:19.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.806 [2024-07-23 17:54:27.294288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.806 [2024-07-23 17:54:27.381689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.806 [2024-07-23 17:54:27.381799] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:19.806 [2024-07-23 17:54:27.381817] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.806 [2024-07-23 17:54:27.381828] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2217237 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2217237 ']' 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2217237 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217237 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217237' 
00:06:20.063 killing process with pid 2217237 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2217237 00:06:20.063 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2217237 00:06:20.321 00:06:20.321 real 0m1.133s 00:06:20.321 user 0m1.232s 00:06:20.321 sys 0m0.432s 00:06:20.321 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.321 17:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.321 ************************************ 00:06:20.321 END TEST exit_on_failed_rpc_init 00:06:20.321 ************************************ 00:06:20.321 17:54:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.321 17:54:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.321 00:06:20.321 real 0m13.318s 00:06:20.321 user 0m12.585s 00:06:20.321 sys 0m1.569s 00:06:20.321 17:54:27 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.321 17:54:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.321 ************************************ 00:06:20.321 END TEST skip_rpc 00:06:20.321 ************************************ 00:06:20.321 17:54:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.321 17:54:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.321 17:54:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.321 17:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.321 17:54:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.321 ************************************ 00:06:20.321 START TEST rpc_client 00:06:20.321 ************************************ 00:06:20.321 17:54:27 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.580 * Looking for test storage... 00:06:20.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:20.580 17:54:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:20.580 OK 00:06:20.580 17:54:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:20.580 00:06:20.580 real 0m0.070s 00:06:20.580 user 0m0.028s 00:06:20.580 sys 0m0.048s 00:06:20.580 17:54:28 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.580 17:54:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:20.580 ************************************ 00:06:20.580 END TEST rpc_client 00:06:20.580 ************************************ 00:06:20.580 17:54:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.580 17:54:28 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.580 17:54:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.580 17:54:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.580 17:54:28 -- common/autotest_common.sh@10 -- # set +x 00:06:20.580 ************************************ 00:06:20.580 START TEST json_config 00:06:20.580 ************************************ 00:06:20.580 17:54:28 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.580 17:54:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.580 
17:54:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.580 17:54:28 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.580 17:54:28 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.580 17:54:28 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.580 17:54:28 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.580 17:54:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.580 17:54:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.580 17:54:28 json_config -- paths/export.sh@5 -- # export PATH 00:06:20.580 17:54:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@47 -- # : 0 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.580 
17:54:28 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.580 17:54:28 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.580 17:54:28 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.580 17:54:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:20.580 17:54:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:20.580 17:54:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:20.580 17:54:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:20.580 17:54:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:20.580 17:54:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:20.581 INFO: JSON configuration test init 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.581 17:54:28 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:20.581 17:54:28 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.581 17:54:28 json_config -- json_config/common.sh@10 -- # shift 00:06:20.581 17:54:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.581 17:54:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.581 17:54:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.581 17:54:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.581 17:54:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:06:20.581 17:54:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2217491 00:06:20.581 17:54:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:20.581 17:54:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.581 Waiting for target to run... 00:06:20.581 17:54:28 json_config -- json_config/common.sh@25 -- # waitforlisten 2217491 /var/tmp/spdk_tgt.sock 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@829 -- # '[' -z 2217491 ']' 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.581 17:54:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.581 [2024-07-23 17:54:28.156424] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:20.581 [2024-07-23 17:54:28.156509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217491 ] 00:06:20.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.839 [2024-07-23 17:54:28.495737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.097 [2024-07-23 17:54:28.552573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.663 17:54:29 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.663 17:54:29 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:21.663 17:54:29 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.663 00:06:21.663 17:54:29 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:21.663 17:54:29 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:21.663 17:54:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.663 17:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.663 17:54:29 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:21.663 17:54:29 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:21.663 17:54:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.663 17:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.663 17:54:29 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:21.663 17:54:29 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:21.663 17:54:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:24.946 
17:54:32 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:24.946 17:54:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.946 17:54:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:24.946 17:54:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@51 -- # sort 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:24.946 17:54:32 
json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.946 17:54:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:24.946 17:54:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.946 17:54:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:24.946 17:54:32 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.946 17:54:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.204 MallocForNvmf0 00:06:25.204 17:54:32 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.204 17:54:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.462 MallocForNvmf1 00:06:25.462 17:54:33 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.462 17:54:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.720 [2024-07-23 17:54:33.245970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.720 17:54:33 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.720 17:54:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.977 17:54:33 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.977 17:54:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.234 17:54:33 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.234 17:54:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.492 17:54:33 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.492 17:54:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.750 [2024-07-23 17:54:34.208983] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:26.750 17:54:34 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:26.750 17:54:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:26.750 17:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.750 17:54:34 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:26.750 17:54:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:26.750 17:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.750 17:54:34 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:26.750 17:54:34 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.750 17:54:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.007 MallocBdevForConfigChangeCheck 00:06:27.008 17:54:34 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:27.008 17:54:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.008 17:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.008 17:54:34 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:27.008 17:54:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.270 17:54:34 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:27.270 INFO: shutting down applications... 
00:06:27.270 17:54:34 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:27.270 17:54:34 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:27.270 17:54:34 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:27.270 17:54:34 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:29.211 Calling clear_iscsi_subsystem 00:06:29.211 Calling clear_nvmf_subsystem 00:06:29.211 Calling clear_nbd_subsystem 00:06:29.211 Calling clear_ublk_subsystem 00:06:29.211 Calling clear_vhost_blk_subsystem 00:06:29.211 Calling clear_vhost_scsi_subsystem 00:06:29.211 Calling clear_bdev_subsystem 00:06:29.211 17:54:36 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:29.211 17:54:36 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:29.211 17:54:36 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:29.211 17:54:36 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.211 17:54:36 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:29.211 17:54:36 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:29.469 17:54:36 json_config -- json_config/json_config.sh@349 -- # break 00:06:29.469 17:54:36 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:29.469 17:54:36 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:29.469 17:54:36 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:29.469 17:54:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:29.469 17:54:36 json_config -- json_config/common.sh@35 -- # [[ -n 2217491 ]] 00:06:29.469 17:54:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2217491 00:06:29.469 17:54:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:29.469 17:54:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.469 17:54:36 json_config -- json_config/common.sh@41 -- # kill -0 2217491 00:06:29.469 17:54:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.037 17:54:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.037 17:54:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.037 17:54:37 json_config -- json_config/common.sh@41 -- # kill -0 2217491 00:06:30.037 17:54:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.037 17:54:37 json_config -- json_config/common.sh@43 -- # break 00:06:30.037 17:54:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.037 17:54:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.037 SPDK target shutdown done 00:06:30.037 17:54:37 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:30.037 INFO: relaunching applications... 
00:06:30.037 17:54:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.037 17:54:37 json_config -- json_config/common.sh@9 -- # local app=target 00:06:30.037 17:54:37 json_config -- json_config/common.sh@10 -- # shift 00:06:30.037 17:54:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.037 17:54:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.037 17:54:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.037 17:54:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.037 17:54:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.037 17:54:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2218691 00:06:30.037 17:54:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.037 17:54:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.037 Waiting for target to run... 00:06:30.037 17:54:37 json_config -- json_config/common.sh@25 -- # waitforlisten 2218691 /var/tmp/spdk_tgt.sock 00:06:30.037 17:54:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 2218691 ']' 00:06:30.037 17:54:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.037 17:54:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.037 17:54:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:30.037 17:54:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.037 17:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.037 [2024-07-23 17:54:37.523538] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:06:30.037 [2024-07-23 17:54:37.523644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218691 ] 00:06:30.037 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.605 [2024-07-23 17:54:38.053237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.605 [2024-07-23 17:54:38.126473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.885 [2024-07-23 17:54:41.147338] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.885 [2024-07-23 17:54:41.179814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:34.450 17:54:41 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.450 17:54:41 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:34.450 17:54:41 json_config -- json_config/common.sh@26 -- # echo '' 00:06:34.450 00:06:34.450 17:54:41 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:34.450 17:54:41 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:34.450 INFO: Checking if target configuration is the same... 
00:06:34.450 17:54:41 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.450 17:54:41 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:34.450 17:54:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.450 + '[' 2 -ne 2 ']' 00:06:34.450 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.450 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:34.450 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.450 +++ basename /dev/fd/62 00:06:34.450 ++ mktemp /tmp/62.XXX 00:06:34.450 + tmp_file_1=/tmp/62.jAx 00:06:34.450 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.450 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.450 + tmp_file_2=/tmp/spdk_tgt_config.json.Mut 00:06:34.450 + ret=0 00:06:34.450 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.708 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.708 + diff -u /tmp/62.jAx /tmp/spdk_tgt_config.json.Mut 00:06:34.708 + echo 'INFO: JSON config files are the same' 00:06:34.708 INFO: JSON config files are the same 00:06:34.708 + rm /tmp/62.jAx /tmp/spdk_tgt_config.json.Mut 00:06:34.708 + exit 0 00:06:34.708 17:54:42 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:34.708 17:54:42 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:34.708 INFO: changing configuration and checking if this can be detected... 
00:06:34.965 17:54:42 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.965 17:54:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.965 17:54:42 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.965 17:54:42 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:34.965 17:54:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.965 + '[' 2 -ne 2 ']' 00:06:34.965 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.965 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:34.965 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.965 +++ basename /dev/fd/62 00:06:34.965 ++ mktemp /tmp/62.XXX 00:06:34.965 + tmp_file_1=/tmp/62.ovM 00:06:34.965 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.223 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:35.223 + tmp_file_2=/tmp/spdk_tgt_config.json.k2d 00:06:35.223 + ret=0 00:06:35.223 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:35.481 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:35.481 + diff -u /tmp/62.ovM /tmp/spdk_tgt_config.json.k2d 00:06:35.481 + ret=1 00:06:35.481 + echo '=== Start of file: /tmp/62.ovM ===' 00:06:35.481 + cat /tmp/62.ovM 00:06:35.481 + echo '=== End of file: /tmp/62.ovM ===' 00:06:35.481 + echo '' 00:06:35.481 + echo '=== Start of file: /tmp/spdk_tgt_config.json.k2d ===' 00:06:35.481 + cat /tmp/spdk_tgt_config.json.k2d 00:06:35.481 + echo '=== End of file: /tmp/spdk_tgt_config.json.k2d ===' 00:06:35.481 + echo '' 00:06:35.481 + rm /tmp/62.ovM /tmp/spdk_tgt_config.json.k2d 00:06:35.481 + exit 1 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:35.481 INFO: configuration change detected. 
00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@321 -- # [[ -n 2218691 ]] 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.481 17:54:43 json_config -- json_config/json_config.sh@327 -- # killprocess 2218691 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@948 -- # '[' -z 2218691 ']' 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@952 -- # kill -0 
2218691 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@953 -- # uname 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218691 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2218691' 00:06:35.481 killing process with pid 2218691 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@967 -- # kill 2218691 00:06:35.481 17:54:43 json_config -- common/autotest_common.sh@972 -- # wait 2218691 00:06:37.377 17:54:44 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.377 17:54:44 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:37.377 17:54:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.377 17:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.377 17:54:44 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:37.377 17:54:44 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:37.377 INFO: Success 00:06:37.377 00:06:37.377 real 0m16.673s 00:06:37.377 user 0m18.533s 00:06:37.377 sys 0m2.038s 00:06:37.377 17:54:44 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.377 17:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.377 ************************************ 00:06:37.377 END TEST json_config 00:06:37.377 ************************************ 00:06:37.377 17:54:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.377 17:54:44 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:37.377 17:54:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.377 17:54:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.377 17:54:44 -- common/autotest_common.sh@10 -- # set +x 00:06:37.377 ************************************ 00:06:37.377 START TEST json_config_extra_key 00:06:37.377 ************************************ 00:06:37.377 17:54:44 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:37.377 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.377 17:54:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:37.377 17:54:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.377 17:54:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.378 17:54:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.378 17:54:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.378 17:54:44 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.378 17:54:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.378 17:54:44 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.378 17:54:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.378 17:54:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:37.378 17:54:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.378 17:54:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:37.378 INFO: launching applications... 
00:06:37.378 17:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2219728 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.378 Waiting for target to run... 
00:06:37.378 17:54:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2219728 /var/tmp/spdk_tgt.sock 00:06:37.378 17:54:44 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2219728 ']' 00:06:37.378 17:54:44 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.378 17:54:44 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.378 17:54:44 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.378 17:54:44 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.378 17:54:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.378 [2024-07-23 17:54:44.883499] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:37.378 [2024-07-23 17:54:44.883596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219728 ] 00:06:37.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.945 [2024-07-23 17:54:45.379044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.945 [2024-07-23 17:54:45.457809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.202 17:54:45 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.202 17:54:45 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:38.202 17:54:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:38.202 00:06:38.202 17:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:38.202 INFO: shutting down applications... 
00:06:38.202 17:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:38.202 17:54:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:38.203 17:54:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:38.203 17:54:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2219728 ]] 00:06:38.203 17:54:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2219728 00:06:38.203 17:54:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:38.203 17:54:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.203 17:54:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2219728 00:06:38.203 17:54:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:38.768 17:54:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.768 17:54:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.768 17:54:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2219728 00:06:38.768 17:54:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.768 17:54:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:38.768 17:54:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.768 17:54:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.768 SPDK target shutdown done 00:06:38.768 17:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:38.768 Success 00:06:38.768 00:06:38.768 real 0m1.544s 00:06:38.768 user 0m1.328s 00:06:38.768 sys 0m0.600s 00:06:38.768 17:54:46 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.768 17:54:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:38.768 
************************************ 00:06:38.768 END TEST json_config_extra_key 00:06:38.768 ************************************ 00:06:38.768 17:54:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.768 17:54:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.768 17:54:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.768 17:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.768 17:54:46 -- common/autotest_common.sh@10 -- # set +x 00:06:38.768 ************************************ 00:06:38.768 START TEST alias_rpc 00:06:38.768 ************************************ 00:06:38.768 17:54:46 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.768 * Looking for test storage... 00:06:38.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:38.768 17:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:39.026 17:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2219917 00:06:39.026 17:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.026 17:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2219917 00:06:39.026 17:54:46 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2219917 ']' 00:06:39.026 17:54:46 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.026 17:54:46 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.026 17:54:46 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:39.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.026 17:54:46 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.026 17:54:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.026 [2024-07-23 17:54:46.479270] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:06:39.026 [2024-07-23 17:54:46.479387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219917 ] 00:06:39.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.026 [2024-07-23 17:54:46.538956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.026 [2024-07-23 17:54:46.633660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.283 17:54:46 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.283 17:54:46 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:39.283 17:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:39.541 17:54:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2219917 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2219917 ']' 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2219917 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2219917 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.541 
17:54:47 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2219917' 00:06:39.541 killing process with pid 2219917 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@967 -- # kill 2219917 00:06:39.541 17:54:47 alias_rpc -- common/autotest_common.sh@972 -- # wait 2219917 00:06:40.106 00:06:40.106 real 0m1.193s 00:06:40.106 user 0m1.292s 00:06:40.106 sys 0m0.404s 00:06:40.107 17:54:47 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.107 17:54:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.107 ************************************ 00:06:40.107 END TEST alias_rpc 00:06:40.107 ************************************ 00:06:40.107 17:54:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.107 17:54:47 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:40.107 17:54:47 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:40.107 17:54:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.107 17:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.107 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.107 ************************************ 00:06:40.107 START TEST spdkcli_tcp 00:06:40.107 ************************************ 00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:40.107 * Looking for test storage... 
00:06:40.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2220106 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:40.107 17:54:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2220106 00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2220106 ']' 00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.107 17:54:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.107 [2024-07-23 17:54:47.729220] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:06:40.107 [2024-07-23 17:54:47.729330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220106 ] 00:06:40.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.365 [2024-07-23 17:54:47.792448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.365 [2024-07-23 17:54:47.876988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.365 [2024-07-23 17:54:47.876992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.622 17:54:48 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.622 17:54:48 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:40.622 17:54:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2220232 00:06:40.622 17:54:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:40.622 17:54:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:40.881 [ 00:06:40.881 "bdev_malloc_delete", 00:06:40.881 "bdev_malloc_create", 00:06:40.881 "bdev_null_resize", 00:06:40.881 "bdev_null_delete", 00:06:40.881 "bdev_null_create", 00:06:40.881 "bdev_nvme_cuse_unregister", 00:06:40.881 "bdev_nvme_cuse_register", 00:06:40.881 "bdev_opal_new_user", 00:06:40.881 "bdev_opal_set_lock_state", 00:06:40.881 "bdev_opal_delete", 00:06:40.881 "bdev_opal_get_info", 00:06:40.881 "bdev_opal_create", 00:06:40.881 "bdev_nvme_opal_revert", 00:06:40.881 
"bdev_nvme_opal_init", 00:06:40.881 "bdev_nvme_send_cmd", 00:06:40.881 "bdev_nvme_get_path_iostat", 00:06:40.881 "bdev_nvme_get_mdns_discovery_info", 00:06:40.881 "bdev_nvme_stop_mdns_discovery", 00:06:40.881 "bdev_nvme_start_mdns_discovery", 00:06:40.881 "bdev_nvme_set_multipath_policy", 00:06:40.881 "bdev_nvme_set_preferred_path", 00:06:40.881 "bdev_nvme_get_io_paths", 00:06:40.881 "bdev_nvme_remove_error_injection", 00:06:40.881 "bdev_nvme_add_error_injection", 00:06:40.881 "bdev_nvme_get_discovery_info", 00:06:40.881 "bdev_nvme_stop_discovery", 00:06:40.881 "bdev_nvme_start_discovery", 00:06:40.881 "bdev_nvme_get_controller_health_info", 00:06:40.881 "bdev_nvme_disable_controller", 00:06:40.881 "bdev_nvme_enable_controller", 00:06:40.881 "bdev_nvme_reset_controller", 00:06:40.881 "bdev_nvme_get_transport_statistics", 00:06:40.881 "bdev_nvme_apply_firmware", 00:06:40.881 "bdev_nvme_detach_controller", 00:06:40.881 "bdev_nvme_get_controllers", 00:06:40.881 "bdev_nvme_attach_controller", 00:06:40.881 "bdev_nvme_set_hotplug", 00:06:40.881 "bdev_nvme_set_options", 00:06:40.881 "bdev_passthru_delete", 00:06:40.881 "bdev_passthru_create", 00:06:40.881 "bdev_lvol_set_parent_bdev", 00:06:40.881 "bdev_lvol_set_parent", 00:06:40.881 "bdev_lvol_check_shallow_copy", 00:06:40.881 "bdev_lvol_start_shallow_copy", 00:06:40.881 "bdev_lvol_grow_lvstore", 00:06:40.881 "bdev_lvol_get_lvols", 00:06:40.881 "bdev_lvol_get_lvstores", 00:06:40.881 "bdev_lvol_delete", 00:06:40.881 "bdev_lvol_set_read_only", 00:06:40.881 "bdev_lvol_resize", 00:06:40.881 "bdev_lvol_decouple_parent", 00:06:40.881 "bdev_lvol_inflate", 00:06:40.881 "bdev_lvol_rename", 00:06:40.881 "bdev_lvol_clone_bdev", 00:06:40.881 "bdev_lvol_clone", 00:06:40.881 "bdev_lvol_snapshot", 00:06:40.881 "bdev_lvol_create", 00:06:40.881 "bdev_lvol_delete_lvstore", 00:06:40.881 "bdev_lvol_rename_lvstore", 00:06:40.881 "bdev_lvol_create_lvstore", 00:06:40.881 "bdev_raid_set_options", 00:06:40.881 "bdev_raid_remove_base_bdev", 
00:06:40.881 "bdev_raid_add_base_bdev", 00:06:40.881 "bdev_raid_delete", 00:06:40.881 "bdev_raid_create", 00:06:40.881 "bdev_raid_get_bdevs", 00:06:40.881 "bdev_error_inject_error", 00:06:40.881 "bdev_error_delete", 00:06:40.881 "bdev_error_create", 00:06:40.881 "bdev_split_delete", 00:06:40.881 "bdev_split_create", 00:06:40.881 "bdev_delay_delete", 00:06:40.881 "bdev_delay_create", 00:06:40.881 "bdev_delay_update_latency", 00:06:40.881 "bdev_zone_block_delete", 00:06:40.881 "bdev_zone_block_create", 00:06:40.881 "blobfs_create", 00:06:40.881 "blobfs_detect", 00:06:40.881 "blobfs_set_cache_size", 00:06:40.881 "bdev_aio_delete", 00:06:40.881 "bdev_aio_rescan", 00:06:40.881 "bdev_aio_create", 00:06:40.881 "bdev_ftl_set_property", 00:06:40.881 "bdev_ftl_get_properties", 00:06:40.881 "bdev_ftl_get_stats", 00:06:40.881 "bdev_ftl_unmap", 00:06:40.881 "bdev_ftl_unload", 00:06:40.881 "bdev_ftl_delete", 00:06:40.881 "bdev_ftl_load", 00:06:40.881 "bdev_ftl_create", 00:06:40.881 "bdev_virtio_attach_controller", 00:06:40.881 "bdev_virtio_scsi_get_devices", 00:06:40.881 "bdev_virtio_detach_controller", 00:06:40.881 "bdev_virtio_blk_set_hotplug", 00:06:40.881 "bdev_iscsi_delete", 00:06:40.881 "bdev_iscsi_create", 00:06:40.881 "bdev_iscsi_set_options", 00:06:40.881 "accel_error_inject_error", 00:06:40.881 "ioat_scan_accel_module", 00:06:40.881 "dsa_scan_accel_module", 00:06:40.881 "iaa_scan_accel_module", 00:06:40.881 "vfu_virtio_create_scsi_endpoint", 00:06:40.881 "vfu_virtio_scsi_remove_target", 00:06:40.881 "vfu_virtio_scsi_add_target", 00:06:40.881 "vfu_virtio_create_blk_endpoint", 00:06:40.881 "vfu_virtio_delete_endpoint", 00:06:40.881 "keyring_file_remove_key", 00:06:40.881 "keyring_file_add_key", 00:06:40.881 "keyring_linux_set_options", 00:06:40.881 "iscsi_get_histogram", 00:06:40.881 "iscsi_enable_histogram", 00:06:40.881 "iscsi_set_options", 00:06:40.881 "iscsi_get_auth_groups", 00:06:40.881 "iscsi_auth_group_remove_secret", 00:06:40.881 "iscsi_auth_group_add_secret", 
00:06:40.881 "iscsi_delete_auth_group", 00:06:40.881 "iscsi_create_auth_group", 00:06:40.881 "iscsi_set_discovery_auth", 00:06:40.881 "iscsi_get_options", 00:06:40.881 "iscsi_target_node_request_logout", 00:06:40.881 "iscsi_target_node_set_redirect", 00:06:40.881 "iscsi_target_node_set_auth", 00:06:40.881 "iscsi_target_node_add_lun", 00:06:40.881 "iscsi_get_stats", 00:06:40.881 "iscsi_get_connections", 00:06:40.881 "iscsi_portal_group_set_auth", 00:06:40.881 "iscsi_start_portal_group", 00:06:40.881 "iscsi_delete_portal_group", 00:06:40.881 "iscsi_create_portal_group", 00:06:40.881 "iscsi_get_portal_groups", 00:06:40.881 "iscsi_delete_target_node", 00:06:40.881 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.881 "iscsi_target_node_add_pg_ig_maps", 00:06:40.881 "iscsi_create_target_node", 00:06:40.881 "iscsi_get_target_nodes", 00:06:40.881 "iscsi_delete_initiator_group", 00:06:40.881 "iscsi_initiator_group_remove_initiators", 00:06:40.881 "iscsi_initiator_group_add_initiators", 00:06:40.881 "iscsi_create_initiator_group", 00:06:40.881 "iscsi_get_initiator_groups", 00:06:40.881 "nvmf_set_crdt", 00:06:40.881 "nvmf_set_config", 00:06:40.881 "nvmf_set_max_subsystems", 00:06:40.881 "nvmf_stop_mdns_prr", 00:06:40.881 "nvmf_publish_mdns_prr", 00:06:40.881 "nvmf_subsystem_get_listeners", 00:06:40.881 "nvmf_subsystem_get_qpairs", 00:06:40.881 "nvmf_subsystem_get_controllers", 00:06:40.881 "nvmf_get_stats", 00:06:40.881 "nvmf_get_transports", 00:06:40.881 "nvmf_create_transport", 00:06:40.881 "nvmf_get_targets", 00:06:40.881 "nvmf_delete_target", 00:06:40.881 "nvmf_create_target", 00:06:40.881 "nvmf_subsystem_allow_any_host", 00:06:40.881 "nvmf_subsystem_remove_host", 00:06:40.881 "nvmf_subsystem_add_host", 00:06:40.881 "nvmf_ns_remove_host", 00:06:40.881 "nvmf_ns_add_host", 00:06:40.881 "nvmf_subsystem_remove_ns", 00:06:40.881 "nvmf_subsystem_add_ns", 00:06:40.881 "nvmf_subsystem_listener_set_ana_state", 00:06:40.881 "nvmf_discovery_get_referrals", 00:06:40.882 
"nvmf_discovery_remove_referral", 00:06:40.882 "nvmf_discovery_add_referral", 00:06:40.882 "nvmf_subsystem_remove_listener", 00:06:40.882 "nvmf_subsystem_add_listener", 00:06:40.882 "nvmf_delete_subsystem", 00:06:40.882 "nvmf_create_subsystem", 00:06:40.882 "nvmf_get_subsystems", 00:06:40.882 "env_dpdk_get_mem_stats", 00:06:40.882 "nbd_get_disks", 00:06:40.882 "nbd_stop_disk", 00:06:40.882 "nbd_start_disk", 00:06:40.882 "ublk_recover_disk", 00:06:40.882 "ublk_get_disks", 00:06:40.882 "ublk_stop_disk", 00:06:40.882 "ublk_start_disk", 00:06:40.882 "ublk_destroy_target", 00:06:40.882 "ublk_create_target", 00:06:40.882 "virtio_blk_create_transport", 00:06:40.882 "virtio_blk_get_transports", 00:06:40.882 "vhost_controller_set_coalescing", 00:06:40.882 "vhost_get_controllers", 00:06:40.882 "vhost_delete_controller", 00:06:40.882 "vhost_create_blk_controller", 00:06:40.882 "vhost_scsi_controller_remove_target", 00:06:40.882 "vhost_scsi_controller_add_target", 00:06:40.882 "vhost_start_scsi_controller", 00:06:40.882 "vhost_create_scsi_controller", 00:06:40.882 "thread_set_cpumask", 00:06:40.882 "framework_get_governor", 00:06:40.882 "framework_get_scheduler", 00:06:40.882 "framework_set_scheduler", 00:06:40.882 "framework_get_reactors", 00:06:40.882 "thread_get_io_channels", 00:06:40.882 "thread_get_pollers", 00:06:40.882 "thread_get_stats", 00:06:40.882 "framework_monitor_context_switch", 00:06:40.882 "spdk_kill_instance", 00:06:40.882 "log_enable_timestamps", 00:06:40.882 "log_get_flags", 00:06:40.882 "log_clear_flag", 00:06:40.882 "log_set_flag", 00:06:40.882 "log_get_level", 00:06:40.882 "log_set_level", 00:06:40.882 "log_get_print_level", 00:06:40.882 "log_set_print_level", 00:06:40.882 "framework_enable_cpumask_locks", 00:06:40.882 "framework_disable_cpumask_locks", 00:06:40.882 "framework_wait_init", 00:06:40.882 "framework_start_init", 00:06:40.882 "scsi_get_devices", 00:06:40.882 "bdev_get_histogram", 00:06:40.882 "bdev_enable_histogram", 00:06:40.882 
"bdev_set_qos_limit", 00:06:40.882 "bdev_set_qd_sampling_period", 00:06:40.882 "bdev_get_bdevs", 00:06:40.882 "bdev_reset_iostat", 00:06:40.882 "bdev_get_iostat", 00:06:40.882 "bdev_examine", 00:06:40.882 "bdev_wait_for_examine", 00:06:40.882 "bdev_set_options", 00:06:40.882 "notify_get_notifications", 00:06:40.882 "notify_get_types", 00:06:40.882 "accel_get_stats", 00:06:40.882 "accel_set_options", 00:06:40.882 "accel_set_driver", 00:06:40.882 "accel_crypto_key_destroy", 00:06:40.882 "accel_crypto_keys_get", 00:06:40.882 "accel_crypto_key_create", 00:06:40.882 "accel_assign_opc", 00:06:40.882 "accel_get_module_info", 00:06:40.882 "accel_get_opc_assignments", 00:06:40.882 "vmd_rescan", 00:06:40.882 "vmd_remove_device", 00:06:40.882 "vmd_enable", 00:06:40.882 "sock_get_default_impl", 00:06:40.882 "sock_set_default_impl", 00:06:40.882 "sock_impl_set_options", 00:06:40.882 "sock_impl_get_options", 00:06:40.882 "iobuf_get_stats", 00:06:40.882 "iobuf_set_options", 00:06:40.882 "keyring_get_keys", 00:06:40.882 "framework_get_pci_devices", 00:06:40.882 "framework_get_config", 00:06:40.882 "framework_get_subsystems", 00:06:40.882 "vfu_tgt_set_base_path", 00:06:40.882 "trace_get_info", 00:06:40.882 "trace_get_tpoint_group_mask", 00:06:40.882 "trace_disable_tpoint_group", 00:06:40.882 "trace_enable_tpoint_group", 00:06:40.882 "trace_clear_tpoint_mask", 00:06:40.882 "trace_set_tpoint_mask", 00:06:40.882 "spdk_get_version", 00:06:40.882 "rpc_get_methods" 00:06:40.882 ] 00:06:40.882 17:54:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.882 17:54:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.882 17:54:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2220106 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2220106 ']' 
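The `rpc_get_methods` listing above was produced by `rpc.py -s 127.0.0.1 -p 9998` talking through a socat bridge (`TCP-LISTEN:9998` forwarded to `UNIX-CONNECT:/var/tmp/spdk.sock`), so the same bytes flow over either transport. A minimal sketch of the JSON-RPC 2.0 framing involved — the helper names here are hypothetical illustrations, not SPDK's actual client code (which lives in `scripts/rpc.py`):

```python
import json

def build_rpc_request(method, request_id=1, params=None):
    """Serialize one JSON-RPC 2.0 request as an SPDK RPC client would send it."""
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

def parse_rpc_response(raw):
    """Extract the result field from a JSON-RPC response, raising on an error reply."""
    resp = json.loads(raw)
    if "error" in resp:
        raise RuntimeError(resp["error"])
    return resp["result"]

# Request equivalent to the rpc_get_methods call in the log above.
request = build_rpc_request("rpc_get_methods")

# Shape of a (truncated, illustrative) success response.
sample_response = '{"jsonrpc": "2.0", "id": 1, "result": ["spdk_get_version", "rpc_get_methods"]}'
methods = parse_rpc_response(sample_response)
```

Because the bridge is transparent, nothing in the payload changes between the TCP and UNIX-socket paths; only the connect step differs.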
00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2220106 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220106 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220106' 00:06:40.882 killing process with pid 2220106 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2220106 00:06:40.882 17:54:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2220106 00:06:41.448 00:06:41.448 real 0m1.197s 00:06:41.448 user 0m2.127s 00:06:41.448 sys 0m0.452s 00:06:41.448 17:54:48 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.448 17:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.448 ************************************ 00:06:41.448 END TEST spdkcli_tcp 00:06:41.448 ************************************ 00:06:41.448 17:54:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.448 17:54:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.448 17:54:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.448 17:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.448 17:54:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.448 ************************************ 00:06:41.448 START TEST dpdk_mem_utility 00:06:41.448 ************************************ 00:06:41.448 17:54:48 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.448 * Looking for test storage... 00:06:41.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:41.448 17:54:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.448 17:54:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2220426 00:06:41.448 17:54:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.448 17:54:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2220426 00:06:41.448 17:54:48 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2220426 ']' 00:06:41.448 17:54:48 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.448 17:54:48 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.448 17:54:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.448 17:54:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.448 17:54:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.448 [2024-07-23 17:54:48.955678] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:41.448 [2024-07-23 17:54:48.955768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220426 ] 00:06:41.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.448 [2024-07-23 17:54:49.014062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.448 [2024-07-23 17:54:49.104048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.706 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.706 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:41.706 17:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:41.706 17:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:41.706 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.706 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.706 { 00:06:41.706 "filename": "/tmp/spdk_mem_dump.txt" 00:06:41.706 } 00:06:41.706 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.706 17:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.965 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:41.965 1 heaps totaling size 814.000000 MiB 00:06:41.965 size: 814.000000 MiB heap id: 0 00:06:41.965 end heaps---------- 00:06:41.965 8 mempools totaling size 598.116089 MiB 00:06:41.965 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:41.965 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:41.965 size: 84.521057 MiB name: bdev_io_2220426 00:06:41.965 size: 51.011292 MiB name: evtpool_2220426 
00:06:41.965 size: 50.003479 MiB name: msgpool_2220426 00:06:41.965 size: 21.763794 MiB name: PDU_Pool 00:06:41.965 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:41.965 size: 0.026123 MiB name: Session_Pool 00:06:41.965 end mempools------- 00:06:41.965 6 memzones totaling size 4.142822 MiB 00:06:41.965 size: 1.000366 MiB name: RG_ring_0_2220426 00:06:41.965 size: 1.000366 MiB name: RG_ring_1_2220426 00:06:41.965 size: 1.000366 MiB name: RG_ring_4_2220426 00:06:41.965 size: 1.000366 MiB name: RG_ring_5_2220426 00:06:41.965 size: 0.125366 MiB name: RG_ring_2_2220426 00:06:41.965 size: 0.015991 MiB name: RG_ring_3_2220426 00:06:41.965 end memzones------- 00:06:41.965 17:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:41.965 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:41.965 list of free elements. size: 12.519348 MiB 00:06:41.965 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:41.965 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:41.965 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:41.965 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:41.965 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:41.965 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:41.965 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:41.965 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:41.965 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:41.965 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:41.965 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:41.965 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:41.965 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:41.965 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:06:41.965 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:41.965 list of standard malloc elements. size: 199.218079 MiB 00:06:41.965 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:41.965 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:41.965 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:41.965 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:41.965 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:41.965 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:41.965 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:41.965 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:41.965 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:41.965 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:41.965 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:41.965 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:06:41.965 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:41.965 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:41.965 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:41.965 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:41.965 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:41.965 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:41.965 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:41.965 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:41.965 list of memzone associated elements. 
size: 602.262573 MiB 00:06:41.965 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:41.965 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:41.965 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:41.965 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:41.965 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:41.965 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2220426_0 00:06:41.965 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:41.965 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2220426_0 00:06:41.965 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:41.965 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2220426_0 00:06:41.965 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:41.965 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:41.965 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:41.965 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:41.965 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:41.965 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2220426 00:06:41.965 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:41.965 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2220426 00:06:41.965 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:41.965 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2220426 00:06:41.965 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:41.965 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:41.965 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:41.965 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:41.965 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:41.965 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:41.965 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:41.965 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:41.965 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:41.965 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2220426 00:06:41.965 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:41.965 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2220426 00:06:41.965 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:41.965 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2220426 00:06:41.965 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:41.965 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2220426 00:06:41.965 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:41.965 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2220426 00:06:41.965 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:41.965 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:41.965 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:41.965 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:41.965 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:41.965 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:41.965 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:41.965 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2220426 00:06:41.965 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:41.965 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:41.965 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:41.965 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:41.966 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:06:41.966 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2220426 00:06:41.966 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:41.966 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:41.966 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:41.966 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2220426 00:06:41.966 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:41.966 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2220426 00:06:41.966 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:41.966 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:41.966 17:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:41.966 17:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2220426 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2220426 ']' 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2220426 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220426 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220426' 00:06:41.966 killing process with pid 2220426 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2220426 00:06:41.966 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2220426 00:06:42.224 00:06:42.224 real 0m1.021s 
00:06:42.224 user 0m0.984s 00:06:42.224 sys 0m0.396s 00:06:42.224 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.224 17:54:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.224 ************************************ 00:06:42.224 END TEST dpdk_mem_utility 00:06:42.224 ************************************ 00:06:42.482 17:54:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.482 17:54:49 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.482 17:54:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.482 17:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.482 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.482 ************************************ 00:06:42.482 START TEST event 00:06:42.482 ************************************ 00:06:42.482 17:54:49 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.482 * Looking for test storage... 
00:06:42.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:42.482 17:54:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:42.482 17:54:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.482 17:54:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.482 17:54:49 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:42.482 17:54:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.482 17:54:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.482 ************************************ 00:06:42.482 START TEST event_perf 00:06:42.482 ************************************ 00:06:42.482 17:54:50 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.482 Running I/O for 1 seconds...[2024-07-23 17:54:50.014332] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:42.482 [2024-07-23 17:54:50.014410] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220591 ] 00:06:42.482 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.482 [2024-07-23 17:54:50.075194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.740 [2024-07-23 17:54:50.168355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.740 [2024-07-23 17:54:50.168441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.740 [2024-07-23 17:54:50.168444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.740 [2024-07-23 17:54:50.168385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.670 Running I/O for 1 seconds... 00:06:43.670 lcore 0: 224763 00:06:43.670 lcore 1: 224763 00:06:43.670 lcore 2: 224761 00:06:43.670 lcore 3: 224762 00:06:43.670 done. 
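The `event_perf` run above prints one counter per reactor core ("lcore N: count") for a 1-second run on core mask 0xF. A small sketch — a hypothetical helper, not part of SPDK — of tabulating those lines and checking how evenly the event load was spread:

```python
def summarize_lcores(lines):
    """Parse 'lcore N: count' lines into a {core: count} mapping."""
    counts = {}
    for line in lines:
        line = line.strip()
        if line.startswith("lcore"):
            _, rest = line.split(None, 1)   # e.g. "0: 224763"
            core, count = rest.split(":")
            counts[int(core)] = int(count)
    return counts

# The four counters reported in the run above (cores 0-3, mask 0xF).
output = [
    "lcore 0: 224763",
    "lcore 1: 224763",
    "lcore 2: 224761",
    "lcore 3: 224762",
]
counts = summarize_lcores(output)
total = sum(counts.values())                      # events across all cores
spread = max(counts.values()) - min(counts.values())  # imbalance between cores
```

A spread of only a couple of events across ~225k per core, as seen here, indicates the reactors were driven essentially uniformly.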
00:06:43.670 00:06:43.670 real 0m1.243s 00:06:43.670 user 0m4.153s 00:06:43.670 sys 0m0.085s 00:06:43.670 17:54:51 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.670 17:54:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.670 ************************************ 00:06:43.670 END TEST event_perf 00:06:43.670 ************************************ 00:06:43.670 17:54:51 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.670 17:54:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.670 17:54:51 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:43.670 17:54:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.670 17:54:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.670 ************************************ 00:06:43.670 START TEST event_reactor 00:06:43.670 ************************************ 00:06:43.670 17:54:51 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.670 [2024-07-23 17:54:51.304505] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:43.670 [2024-07-23 17:54:51.304569] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220773 ] 00:06:43.961 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.961 [2024-07-23 17:54:51.364335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.961 [2024-07-23 17:54:51.447703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.925 test_start 00:06:44.925 oneshot 00:06:44.925 tick 100 00:06:44.925 tick 100 00:06:44.925 tick 250 00:06:44.925 tick 100 00:06:44.925 tick 100 00:06:44.925 tick 250 00:06:44.925 tick 500 00:06:44.925 tick 100 00:06:44.925 tick 100 00:06:44.925 tick 100 00:06:44.925 tick 250 00:06:44.925 tick 100 00:06:44.925 tick 100 00:06:44.925 test_end 00:06:44.925 00:06:44.925 real 0m1.232s 00:06:44.925 user 0m1.150s 00:06:44.925 sys 0m0.078s 00:06:44.925 17:54:52 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.925 17:54:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:44.925 ************************************ 00:06:44.925 END TEST event_reactor 00:06:44.925 ************************************ 00:06:44.925 17:54:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:44.925 17:54:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.925 17:54:52 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:44.925 17:54:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.925 17:54:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.925 ************************************ 00:06:44.925 START TEST event_reactor_perf 00:06:44.925 ************************************ 00:06:44.925 17:54:52 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.925 [2024-07-23 17:54:52.584080] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:06:44.925 [2024-07-23 17:54:52.584149] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220935 ] 00:06:45.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.183 [2024-07-23 17:54:52.641884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.183 [2024-07-23 17:54:52.724966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.557 test_start 00:06:46.557 test_end 00:06:46.557 Performance: 448689 events per second 00:06:46.557 00:06:46.557 real 0m1.230s 00:06:46.557 user 0m1.153s 00:06:46.557 sys 0m0.073s 00:06:46.557 17:54:53 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.557 17:54:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.557 ************************************ 00:06:46.557 END TEST event_reactor_perf 00:06:46.557 ************************************ 00:06:46.557 17:54:53 event -- common/autotest_common.sh@1142 -- # return 0 00:06:46.557 17:54:53 event -- event/event.sh@49 -- # uname -s 00:06:46.557 17:54:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:46.557 17:54:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.557 17:54:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.557 17:54:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.557 17:54:53 event -- common/autotest_common.sh@10 -- # set +x 
00:06:46.557 ************************************ 00:06:46.557 START TEST event_scheduler 00:06:46.557 ************************************ 00:06:46.557 17:54:53 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.557 * Looking for test storage... 00:06:46.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:46.557 17:54:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:46.557 17:54:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2221113 00:06:46.557 17:54:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:46.557 17:54:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.557 17:54:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2221113 00:06:46.557 17:54:53 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2221113 ']' 00:06:46.557 17:54:53 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.557 17:54:53 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.557 17:54:53 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.557 17:54:53 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.557 17:54:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.557 [2024-07-23 17:54:53.946923] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:06:46.557 [2024-07-23 17:54:53.946998] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221113 ] 00:06:46.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.557 [2024-07-23 17:54:54.004910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.557 [2024-07-23 17:54:54.092776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.557 [2024-07-23 17:54:54.092842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.557 [2024-07-23 17:54:54.092907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.557 [2024-07-23 17:54:54.092909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.557 17:54:54 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.557 17:54:54 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:46.557 17:54:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.557 17:54:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.557 17:54:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.557 [2024-07-23 17:54:54.169721] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:46.557 [2024-07-23 17:54:54.169746] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:46.557 [2024-07-23 17:54:54.169761] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.557 [2024-07-23 17:54:54.169772] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.557 [2024-07-23 17:54:54.169787] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:06:46.557 17:54:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.557 17:54:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.557 17:54:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.557 17:54:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 [2024-07-23 17:54:54.265131] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:46.816 17:54:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.816 17:54:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.816 17:54:54 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.816 17:54:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.816 17:54:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 ************************************ 00:06:46.816 START TEST scheduler_create_thread 00:06:46.816 ************************************ 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 2 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 3 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 4 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 5 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.816 6 
00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.816 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.817 7 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.817 8 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.817 9 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.817 17:54:54 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.817 10 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.817 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.383 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.383 00:06:47.383 real 0m0.591s 00:06:47.383 user 0m0.013s 00:06:47.383 sys 0m0.001s 00:06:47.383 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.383 17:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.383 ************************************ 00:06:47.383 END TEST scheduler_create_thread 00:06:47.383 ************************************ 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:47.383 17:54:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:47.383 17:54:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2221113 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2221113 ']' 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2221113 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221113 00:06:47.383 17:54:54 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221113' 00:06:47.383 killing process with pid 2221113 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2221113 00:06:47.383 17:54:54 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2221113 00:06:47.949 [2024-07-23 17:54:55.365335] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:47.949 00:06:47.949 real 0m1.730s 00:06:47.949 user 0m2.291s 00:06:47.949 sys 0m0.325s 00:06:47.949 17:54:55 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.949 17:54:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.949 ************************************ 00:06:47.949 END TEST event_scheduler 00:06:47.949 ************************************ 00:06:47.949 17:54:55 event -- common/autotest_common.sh@1142 -- # return 0 00:06:47.949 17:54:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:47.949 17:54:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:47.949 17:54:55 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.949 17:54:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.949 17:54:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.208 ************************************ 00:06:48.208 START TEST app_repeat 00:06:48.208 ************************************ 00:06:48.208 17:54:55 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.208 17:54:55 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2221424 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2221424' 00:06:48.208 Process app_repeat pid: 2221424 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:48.208 spdk_app_start Round 0 00:06:48.208 17:54:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2221424 /var/tmp/spdk-nbd.sock 00:06:48.208 17:54:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2221424 ']' 00:06:48.208 17:54:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.208 17:54:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.208 17:54:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:48.208 17:54:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.208 17:54:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.208 [2024-07-23 17:54:55.655383] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:06:48.208 [2024-07-23 17:54:55.655450] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221424 ] 00:06:48.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.208 [2024-07-23 17:54:55.714972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.208 [2024-07-23 17:54:55.803216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.208 [2024-07-23 17:54:55.803218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.466 17:54:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.466 17:54:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.466 17:54:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.724 Malloc0 00:06:48.724 17:54:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.982 Malloc1 00:06:48.982 17:54:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.982 17:54:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.240 /dev/nbd0 00:06:49.240 17:54:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.240 17:54:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.240 1+0 records in 00:06:49.240 1+0 records out 00:06:49.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192951 s, 21.2 MB/s 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.240 17:54:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.240 17:54:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.240 17:54:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.240 17:54:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.498 /dev/nbd1 00:06:49.498 17:54:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.498 17:54:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.498 1+0 records in 00:06:49.498 1+0 records out 00:06:49.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238292 s, 17.2 MB/s 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.498 17:54:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.498 17:54:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.498 17:54:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.498 17:54:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.498 17:54:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.498 17:54:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.756 { 00:06:49.756 "nbd_device": "/dev/nbd0", 00:06:49.756 
"bdev_name": "Malloc0"
00:06:49.756 },
00:06:49.756 {
00:06:49.756 "nbd_device": "/dev/nbd1",
00:06:49.756 "bdev_name": "Malloc1"
00:06:49.756 }
00:06:49.756 ]'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:49.756 {
00:06:49.756 "nbd_device": "/dev/nbd0",
00:06:49.756 "bdev_name": "Malloc0"
00:06:49.756 },
00:06:49.756 {
00:06:49.756 "nbd_device": "/dev/nbd1",
00:06:49.756 "bdev_name": "Malloc1"
00:06:49.756 }
00:06:49.756 ]'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:49.756 /dev/nbd1'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:49.756 /dev/nbd1'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:49.756 256+0 records in
00:06:49.756 256+0 records out
00:06:49.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501659 s, 209 MB/s
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:49.756 256+0 records in
00:06:49.756 256+0 records out
00:06:49.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208561 s, 50.3 MB/s
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:49.756 256+0 records in
00:06:49.756 256+0 records out
00:06:49.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230866 s, 45.4 MB/s
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:49.756 17:54:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:50.014 17:54:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:50.271 17:54:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:50.529 17:54:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:50.529 17:54:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:51.098 17:54:58 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:51.098 [2024-07-23 17:54:58.677014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:51.098 [2024-07-23 17:54:58.757857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:51.098 [2024-07-23 17:54:58.757863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.356 [2024-07-23 17:54:58.815628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:51.356 [2024-07-23 17:54:58.815698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:53.882 17:55:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:53.882 17:55:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:53.882 spdk_app_start Round 1
00:06:53.882 17:55:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2221424 /var/tmp/spdk-nbd.sock
00:06:53.882 17:55:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2221424 ']'
00:06:53.882 17:55:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:53.882 17:55:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:53.882 17:55:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:53.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:53.882 17:55:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:53.882 17:55:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:54.140 17:55:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:54.140 17:55:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:06:54.140 17:55:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:54.398 Malloc0
00:06:54.398 17:55:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:54.656 Malloc1
00:06:54.656 17:55:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:54.656 17:55:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:54.914 /dev/nbd0
00:06:54.914 17:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:54.914 17:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:54.914 1+0 records in
00:06:54.914 1+0 records out
00:06:54.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160478 s, 25.5 MB/s
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:54.914 17:55:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:06:54.914 17:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:54.914 17:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:54.914 17:55:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:55.172 /dev/nbd1
00:06:55.172 17:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:55.172 17:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:55.172 1+0 records in
00:06:55.172 1+0 records out
00:06:55.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198444 s, 20.6 MB/s
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:55.172 17:55:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:06:55.172 17:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:55.172 17:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:55.172 17:55:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:55.172 17:55:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:55.172 17:55:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:55.430 {
00:06:55.430 "nbd_device": "/dev/nbd0",
00:06:55.430 "bdev_name": "Malloc0"
00:06:55.430 },
00:06:55.430 {
00:06:55.430 "nbd_device": "/dev/nbd1",
00:06:55.430 "bdev_name": "Malloc1"
00:06:55.430 }
00:06:55.430 ]'
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:55.430 {
00:06:55.430 "nbd_device": "/dev/nbd0",
00:06:55.430 "bdev_name": "Malloc0"
00:06:55.430 },
00:06:55.430 {
00:06:55.430 "nbd_device": "/dev/nbd1",
00:06:55.430 "bdev_name": "Malloc1"
00:06:55.430 }
00:06:55.430 ]'
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:55.430 /dev/nbd1'
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:55.430 /dev/nbd1'
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:55.430 17:55:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:55.430 256+0 records in
00:06:55.430 256+0 records out
00:06:55.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513113 s, 204 MB/s
00:06:55.431 17:55:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:55.431 17:55:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:55.688 256+0 records in
00:06:55.688 256+0 records out
00:06:55.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210306 s, 49.9 MB/s
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:55.688 256+0 records in
00:06:55.688 256+0 records out
00:06:55.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022495 s, 46.6 MB/s
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:55.688 17:55:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:55.689 17:55:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:55.689 17:55:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:55.689 17:55:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:55.689 17:55:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:55.689 17:55:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:55.946 17:55:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:56.202 17:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:56.459 17:55:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:56.459 17:55:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:56.716 17:55:04 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:56.973 [2024-07-23 17:55:04.445872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:56.973 [2024-07-23 17:55:04.534666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:56.973 [2024-07-23 17:55:04.534670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.973 [2024-07-23 17:55:04.588956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:56.973 [2024-07-23 17:55:04.589016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:00.249 17:55:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:00.249 17:55:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:00.249 spdk_app_start Round 2
00:07:00.249 17:55:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2221424 /var/tmp/spdk-nbd.sock
00:07:00.249 17:55:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2221424 ']'
00:07:00.249 17:55:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:00.249 17:55:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:00.249 17:55:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:00.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:00.249 17:55:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:00.249 17:55:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:00.249 17:55:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:00.249 17:55:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:07:00.249 17:55:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:00.249 Malloc0
00:07:00.249 17:55:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:00.507 Malloc1
00:07:00.507 17:55:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:00.507 17:55:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:00.764 /dev/nbd0
00:07:00.764 17:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:00.764 17:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:00.764 1+0 records in
00:07:00.764 1+0 records out
00:07:00.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142768 s, 28.7 MB/s
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:00.764 17:55:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:00.765 17:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:00.765 17:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:00.765 17:55:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:01.023 /dev/nbd1
00:07:01.023 17:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:01.023 17:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:01.023 1+0 records in
00:07:01.023 1+0 records out
00:07:01.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206663 s, 19.8 MB/s
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:01.023 17:55:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:01.023 17:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:01.023 17:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:01.023 17:55:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:01.023 17:55:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:01.023 17:55:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:01.281 {
00:07:01.281 "nbd_device": "/dev/nbd0",
00:07:01.281 "bdev_name": "Malloc0"
00:07:01.281 },
00:07:01.281 {
00:07:01.281 "nbd_device": "/dev/nbd1",
00:07:01.281 "bdev_name": "Malloc1"
00:07:01.281 }
00:07:01.281 ]'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:01.281 {
00:07:01.281 "nbd_device": "/dev/nbd0",
00:07:01.281 "bdev_name": "Malloc0"
00:07:01.281 },
00:07:01.281 {
00:07:01.281 "nbd_device": "/dev/nbd1",
00:07:01.281 "bdev_name": "Malloc1"
00:07:01.281 }
00:07:01.281 ]'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:01.281 /dev/nbd1'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:01.281 /dev/nbd1'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:01.281 256+0 records in
00:07:01.281 256+0 records out
00:07:01.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377484 s, 278 MB/s
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:01.281 256+0 records in
00:07:01.281 256+0 records out
00:07:01.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210396 s, 49.8 MB/s
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:01.281 256+0 records in
00:07:01.281 256+0 records out
00:07:01.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229391 s, 45.7 MB/s
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0'
'/dev/nbd1') 00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.281 17:55:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.539 17:55:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.797 17:55:09 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.797 17:55:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.055 17:55:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.055 17:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.055 17:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.313 17:55:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.313 17:55:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.570 17:55:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.570 [2024-07-23 17:55:10.218284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.828 [2024-07-23 17:55:10.297254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.828 [2024-07-23 17:55:10.297257] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.828 [2024-07-23 17:55:10.354869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.828 [2024-07-23 17:55:10.354940] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:05.356 17:55:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2221424 /var/tmp/spdk-nbd.sock 00:07:05.356 17:55:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2221424 ']' 00:07:05.356 17:55:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.356 17:55:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.356 17:55:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:05.356 17:55:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.356 17:55:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.614 17:55:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.614 17:55:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:05.614 17:55:13 event.app_repeat -- event/event.sh@39 -- # killprocess 2221424 00:07:05.614 17:55:13 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2221424 ']' 00:07:05.614 17:55:13 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2221424 00:07:05.614 17:55:13 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:05.614 17:55:13 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.614 17:55:13 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221424 00:07:05.873 17:55:13 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.873 17:55:13 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.873 17:55:13 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221424' 00:07:05.873 killing process with pid 2221424 00:07:05.873 17:55:13 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2221424 00:07:05.873 17:55:13 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2221424 00:07:05.873 spdk_app_start is called in Round 0. 00:07:05.873 Shutdown signal received, stop current app iteration 00:07:05.873 Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 reinitialization... 00:07:05.873 spdk_app_start is called in Round 1. 00:07:05.873 Shutdown signal received, stop current app iteration 00:07:05.873 Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 reinitialization... 00:07:05.873 spdk_app_start is called in Round 2. 
00:07:05.873 Shutdown signal received, stop current app iteration 00:07:05.873 Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 reinitialization... 00:07:05.873 spdk_app_start is called in Round 3. 00:07:05.873 Shutdown signal received, stop current app iteration 00:07:05.873 17:55:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:05.873 17:55:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:05.873 00:07:05.873 real 0m17.844s 00:07:05.873 user 0m38.985s 00:07:05.873 sys 0m3.136s 00:07:05.873 17:55:13 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.873 17:55:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.873 ************************************ 00:07:05.873 END TEST app_repeat 00:07:05.873 ************************************ 00:07:05.873 17:55:13 event -- common/autotest_common.sh@1142 -- # return 0 00:07:05.873 17:55:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:05.873 17:55:13 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:05.873 17:55:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.873 17:55:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.873 17:55:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.873 ************************************ 00:07:05.873 START TEST cpu_locks 00:07:05.873 ************************************ 00:07:05.873 17:55:13 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.131 * Looking for test storage... 
00:07:06.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:06.131 17:55:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:06.131 17:55:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:06.131 17:55:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:06.131 17:55:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:06.131 17:55:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.131 17:55:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.131 17:55:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.131 ************************************ 00:07:06.131 START TEST default_locks 00:07:06.131 ************************************ 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2223774 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2223774 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2223774 ']' 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.131 17:55:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.131 [2024-07-23 17:55:13.645027] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:06.131 [2024-07-23 17:55:13.645131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223774 ] 00:07:06.131 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.131 [2024-07-23 17:55:13.702119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.131 [2024-07-23 17:55:13.785631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.388 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.388 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:06.388 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2223774 00:07:06.388 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2223774 00:07:06.388 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.645 lslocks: write error 00:07:06.645 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2223774 00:07:06.645 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2223774 ']' 00:07:06.645 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2223774 00:07:06.645 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:06.645 17:55:14 event.cpu_locks.default_locks 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.645 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2223774 00:07:06.929 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.929 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.929 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2223774' 00:07:06.929 killing process with pid 2223774 00:07:06.929 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2223774 00:07:06.929 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2223774 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2223774 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2223774 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 2223774 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2223774 ']' 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.216 17:55:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2223774) - No such process 00:07:07.217 ERROR: process (pid: 2223774) is no longer running 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.217 00:07:07.217 real 0m1.120s 00:07:07.217 user 0m1.046s 00:07:07.217 sys 0m0.496s 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.217 17:55:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.217 ************************************ 00:07:07.217 END TEST default_locks 00:07:07.217 ************************************ 00:07:07.217 17:55:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:07.217 17:55:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:07.217 17:55:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.217 17:55:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.217 17:55:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 ************************************ 00:07:07.217 START TEST default_locks_via_rpc 00:07:07.217 ************************************ 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2223937 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2223937 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2223937 ']' 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.217 17:55:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 [2024-07-23 17:55:14.818427] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:07.217 [2024-07-23 17:55:14.818536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223937 ] 00:07:07.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.474 [2024-07-23 17:55:14.882117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.474 [2024-07-23 17:55:14.970546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2223937 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2223937 00:07:07.732 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2223937 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2223937 ']' 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2223937 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2223937 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2223937' 00:07:07.990 killing process with pid 2223937 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 2223937 00:07:07.990 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2223937 00:07:08.249 00:07:08.249 real 0m1.109s 00:07:08.249 user 0m1.066s 00:07:08.249 sys 0m0.481s 00:07:08.249 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.249 17:55:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.249 ************************************ 00:07:08.249 END TEST default_locks_via_rpc 00:07:08.249 ************************************ 00:07:08.249 17:55:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:08.249 17:55:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:08.249 17:55:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.249 17:55:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.249 17:55:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.507 ************************************ 00:07:08.507 START TEST non_locking_app_on_locked_coremask 00:07:08.507 ************************************ 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2224100 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2224100 /var/tmp/spdk.sock 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2224100 ']' 
00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.507 17:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.507 [2024-07-23 17:55:15.975394] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:08.507 [2024-07-23 17:55:15.975488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224100 ] 00:07:08.507 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.507 [2024-07-23 17:55:16.030626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.507 [2024-07-23 17:55:16.110259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2224109 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2224109 /var/tmp/spdk2.sock 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2224109 ']' 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.765 17:55:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.765 [2024-07-23 17:55:16.382065] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:08.765 [2024-07-23 17:55:16.382172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224109 ] 00:07:08.765 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.023 [2024-07-23 17:55:16.464021] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:09.023 [2024-07-23 17:55:16.464047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.023 [2024-07-23 17:55:16.630901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.954 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.955 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.955 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2224100 00:07:09.955 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2224100 00:07:09.955 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.212 lslocks: write error 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2224100 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2224100 ']' 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2224100 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2224100 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2224100' 00:07:10.212 killing process with pid 2224100 00:07:10.212 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2224100 00:07:10.213 17:55:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2224100 00:07:11.144 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2224109 00:07:11.144 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2224109 ']' 00:07:11.144 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2224109 00:07:11.144 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.145 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.145 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2224109 00:07:11.145 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.145 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.145 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2224109' 00:07:11.145 killing process with pid 2224109 00:07:11.145 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2224109 00:07:11.145 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2224109 00:07:11.402 00:07:11.402 real 0m3.020s 00:07:11.402 user 0m3.157s 00:07:11.402 sys 0m1.015s 00:07:11.402 17:55:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.402 17:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.402 ************************************ 00:07:11.402 END TEST non_locking_app_on_locked_coremask 00:07:11.402 ************************************ 00:07:11.402 17:55:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:11.402 17:55:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:11.402 17:55:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.402 17:55:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.402 17:55:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.402 ************************************ 00:07:11.402 START TEST locking_app_on_unlocked_coremask 00:07:11.402 ************************************ 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2224414 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2224414 /var/tmp/spdk.sock 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2224414 ']' 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.402 17:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.402 [2024-07-23 17:55:19.046719] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:11.402 [2024-07-23 17:55:19.046789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224414 ] 00:07:11.659 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.659 [2024-07-23 17:55:19.103112] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.659 [2024-07-23 17:55:19.103151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.659 [2024-07-23 17:55:19.192096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2224539 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2224539 /var/tmp/spdk2.sock 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2224539 ']' 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.917 17:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.917 [2024-07-23 17:55:19.481958] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:11.917 [2024-07-23 17:55:19.482060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224539 ] 00:07:11.917 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.917 [2024-07-23 17:55:19.563876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.174 [2024-07-23 17:55:19.730944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.105 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.105 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:13.105 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2224539 00:07:13.105 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2224539 00:07:13.105 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.363 lslocks: write error 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2224414 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2224414 ']' 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2224414 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2224414 00:07:13.363 17:55:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2224414' 00:07:13.363 killing process with pid 2224414 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2224414 00:07:13.363 17:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2224414 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2224539 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2224539 ']' 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2224539 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2224539 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2224539' 00:07:14.295 killing process with pid 2224539 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 2224539 00:07:14.295 17:55:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2224539 00:07:14.553 00:07:14.553 real 0m3.116s 00:07:14.553 user 0m3.277s 00:07:14.553 sys 0m1.022s 00:07:14.553 17:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.553 17:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.553 ************************************ 00:07:14.553 END TEST locking_app_on_unlocked_coremask 00:07:14.553 ************************************ 00:07:14.553 17:55:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:14.553 17:55:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:14.554 17:55:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.554 17:55:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.554 17:55:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.554 ************************************ 00:07:14.554 START TEST locking_app_on_locked_coremask 00:07:14.554 ************************************ 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2224848 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2224848 /var/tmp/spdk.sock 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
2224848 ']' 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.554 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.812 [2024-07-23 17:55:22.216110] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:14.812 [2024-07-23 17:55:22.216199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224848 ] 00:07:14.812 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.812 [2024-07-23 17:55:22.272321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.812 [2024-07-23 17:55:22.352304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2224972 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2224972 /var/tmp/spdk2.sock 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2224972 /var/tmp/spdk2.sock 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2224972 /var/tmp/spdk2.sock 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2224972 ']' 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.070 17:55:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.070 [2024-07-23 17:55:22.627710] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:15.070 [2024-07-23 17:55:22.627799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224972 ] 00:07:15.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.070 [2024-07-23 17:55:22.718280] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2224848 has claimed it. 00:07:15.070 [2024-07-23 17:55:22.718373] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:16.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2224972) - No such process 00:07:16.002 ERROR: process (pid: 2224972) is no longer running 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 2224848 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2224848 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.002 lslocks: write error 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2224848 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2224848 ']' 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2224848 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2224848 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2224848' 00:07:16.002 killing process with pid 2224848 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2224848 00:07:16.002 17:55:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2224848 00:07:16.566 00:07:16.566 real 0m1.840s 00:07:16.566 user 0m2.007s 00:07:16.566 sys 0m0.590s 00:07:16.566 17:55:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.566 
17:55:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.566 ************************************ 00:07:16.566 END TEST locking_app_on_locked_coremask 00:07:16.566 ************************************ 00:07:16.566 17:55:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:16.566 17:55:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:16.566 17:55:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.566 17:55:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.566 17:55:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.566 ************************************ 00:07:16.566 START TEST locking_overlapped_coremask 00:07:16.566 ************************************ 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2225141 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2225141 /var/tmp/spdk.sock 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2225141 ']' 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.566 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.566 [2024-07-23 17:55:24.106896] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:16.566 [2024-07-23 17:55:24.107002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225141 ] 00:07:16.566 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.566 [2024-07-23 17:55:24.165123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.824 [2024-07-23 17:55:24.256967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.824 [2024-07-23 17:55:24.257032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.824 [2024-07-23 17:55:24.257034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.081 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.081 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:17.081 17:55:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2225151 00:07:17.081 17:55:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2225151 /var/tmp/spdk2.sock 00:07:17.081 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:17.081 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg 
waitforlisten 2225151 /var/tmp/spdk2.sock 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2225151 /var/tmp/spdk2.sock 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2225151 ']' 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.082 17:55:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.082 [2024-07-23 17:55:24.552155] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:17.082 [2024-07-23 17:55:24.552234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225151 ] 00:07:17.082 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.082 [2024-07-23 17:55:24.641668] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2225141 has claimed it. 00:07:17.082 [2024-07-23 17:55:24.641729] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2225151) - No such process 00:07:17.648 ERROR: process (pid: 2225151) is no longer running 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:17.648 17:55:25 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2225141 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2225141 ']' 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2225141 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225141 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2225141' 00:07:17.648 killing process with pid 2225141 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 2225141 00:07:17.648 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2225141 00:07:18.214 00:07:18.214 real 0m1.609s 00:07:18.214 user 0m4.384s 00:07:18.214 sys 0m0.433s 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.214 17:55:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.214 ************************************ 00:07:18.214 END TEST locking_overlapped_coremask 00:07:18.214 ************************************ 00:07:18.214 17:55:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:18.214 17:55:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:18.214 17:55:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.214 17:55:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.214 17:55:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.214 ************************************ 00:07:18.214 START TEST locking_overlapped_coremask_via_rpc 00:07:18.214 ************************************ 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2225321 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2225321 /var/tmp/spdk.sock 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2225321 ']' 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.214 17:55:25 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.214 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.215 17:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.215 [2024-07-23 17:55:25.767925] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:18.215 [2024-07-23 17:55:25.768009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225321 ] 00:07:18.215 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.215 [2024-07-23 17:55:25.823586] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.215 [2024-07-23 17:55:25.823630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.472 [2024-07-23 17:55:25.910185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.472 [2024-07-23 17:55:25.910290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.472 [2024-07-23 17:55:25.910297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2225446 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2225446 /var/tmp/spdk2.sock 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2225446 ']' 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.731 17:55:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.731 [2024-07-23 17:55:26.201409] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:18.731 [2024-07-23 17:55:26.201496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225446 ] 00:07:18.731 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.731 [2024-07-23 17:55:26.289721] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:18.731 [2024-07-23 17:55:26.289761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.989 [2024-07-23 17:55:26.465858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.989 [2024-07-23 17:55:26.465920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.989 [2024-07-23 17:55:26.465922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.554 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.554 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.554 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.554 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.554 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.554 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:07:19.554 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.554 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.555 [2024-07-23 17:55:27.148411] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2225321 has claimed it. 
00:07:19.555 request: 00:07:19.555 { 00:07:19.555 "method": "framework_enable_cpumask_locks", 00:07:19.555 "req_id": 1 00:07:19.555 } 00:07:19.555 Got JSON-RPC error response 00:07:19.555 response: 00:07:19.555 { 00:07:19.555 "code": -32603, 00:07:19.555 "message": "Failed to claim CPU core: 2" 00:07:19.555 } 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2225321 /var/tmp/spdk.sock 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2225321 ']' 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.555 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.812 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.812 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.812 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2225446 /var/tmp/spdk2.sock 00:07:19.812 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2225446 ']' 00:07:19.812 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.813 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.813 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:19.813 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.813 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.071 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.071 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.071 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:20.071 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.071 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.071 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.071 00:07:20.071 real 0m1.923s 00:07:20.071 user 0m1.014s 00:07:20.071 sys 0m0.177s 00:07:20.071 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.071 17:55:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.071 ************************************ 00:07:20.071 END TEST locking_overlapped_coremask_via_rpc 00:07:20.071 ************************************ 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:20.071 17:55:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:20.071 17:55:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
2225321 ]] 00:07:20.071 17:55:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2225321 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2225321 ']' 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2225321 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225321 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2225321' 00:07:20.071 killing process with pid 2225321 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2225321 00:07:20.071 17:55:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2225321 00:07:20.636 17:55:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2225446 ]] 00:07:20.636 17:55:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2225446 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2225446 ']' 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2225446 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225446 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:20.636 17:55:28 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2225446' 00:07:20.636 killing process with pid 2225446 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2225446 00:07:20.636 17:55:28 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2225446 00:07:20.895 17:55:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.895 17:55:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:20.895 17:55:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2225321 ]] 00:07:20.895 17:55:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2225321 00:07:20.895 17:55:28 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2225321 ']' 00:07:20.895 17:55:28 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2225321 00:07:20.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2225321) - No such process 00:07:20.895 17:55:28 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2225321 is not found' 00:07:20.895 Process with pid 2225321 is not found 00:07:20.895 17:55:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2225446 ]] 00:07:20.895 17:55:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2225446 00:07:20.895 17:55:28 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2225446 ']' 00:07:20.895 17:55:28 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2225446 00:07:20.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2225446) - No such process 00:07:20.895 17:55:28 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2225446 is not found' 00:07:20.895 Process with pid 2225446 is not found 00:07:20.895 17:55:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.895 00:07:20.895 real 0m14.983s 00:07:20.895 user 0m26.510s 00:07:20.895 sys 0m5.091s 00:07:20.895 17:55:28 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.895 17:55:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.895 ************************************ 00:07:20.895 END TEST cpu_locks 00:07:20.895 ************************************ 00:07:20.895 17:55:28 event -- common/autotest_common.sh@1142 -- # return 0 00:07:20.895 00:07:20.895 real 0m38.607s 00:07:20.895 user 1m14.365s 00:07:20.895 sys 0m9.029s 00:07:20.895 17:55:28 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.895 17:55:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.895 ************************************ 00:07:20.895 END TEST event 00:07:20.895 ************************************ 00:07:21.153 17:55:28 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.153 17:55:28 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:21.153 17:55:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.153 17:55:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.153 17:55:28 -- common/autotest_common.sh@10 -- # set +x 00:07:21.153 ************************************ 00:07:21.153 START TEST thread 00:07:21.153 ************************************ 00:07:21.153 17:55:28 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:21.153 * Looking for test storage... 
00:07:21.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:21.153 17:55:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.153 17:55:28 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:21.153 17:55:28 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.153 17:55:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.153 ************************************ 00:07:21.153 START TEST thread_poller_perf 00:07:21.153 ************************************ 00:07:21.153 17:55:28 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.153 [2024-07-23 17:55:28.677543] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:21.153 [2024-07-23 17:55:28.677609] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225809 ] 00:07:21.153 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.153 [2024-07-23 17:55:28.734348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.411 [2024-07-23 17:55:28.816963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.411 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:22.343 ====================================== 00:07:22.343 busy:2708745588 (cyc) 00:07:22.343 total_run_count: 364000 00:07:22.343 tsc_hz: 2700000000 (cyc) 00:07:22.343 ====================================== 00:07:22.343 poller_cost: 7441 (cyc), 2755 (nsec) 00:07:22.343 00:07:22.343 real 0m1.235s 00:07:22.343 user 0m1.162s 00:07:22.343 sys 0m0.068s 00:07:22.343 17:55:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.343 17:55:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.343 ************************************ 00:07:22.343 END TEST thread_poller_perf 00:07:22.343 ************************************ 00:07:22.343 17:55:29 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:22.343 17:55:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.343 17:55:29 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:22.343 17:55:29 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.343 17:55:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.343 ************************************ 00:07:22.343 START TEST thread_poller_perf 00:07:22.343 ************************************ 00:07:22.343 17:55:29 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.343 [2024-07-23 17:55:29.967155] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:22.343 [2024-07-23 17:55:29.967224] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225968 ] 00:07:22.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.601 [2024-07-23 17:55:30.030137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.601 [2024-07-23 17:55:30.118309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.601 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:23.561 ====================================== 00:07:23.561 busy:2702011374 (cyc) 00:07:23.561 total_run_count: 4650000 00:07:23.561 tsc_hz: 2700000000 (cyc) 00:07:23.561 ====================================== 00:07:23.561 poller_cost: 581 (cyc), 215 (nsec) 00:07:23.561 00:07:23.561 real 0m1.242s 00:07:23.561 user 0m1.158s 00:07:23.561 sys 0m0.078s 00:07:23.561 17:55:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.561 17:55:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.561 ************************************ 00:07:23.561 END TEST thread_poller_perf 00:07:23.561 ************************************ 00:07:23.561 17:55:31 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:23.561 17:55:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.561 00:07:23.561 real 0m2.634s 00:07:23.561 user 0m2.393s 00:07:23.561 sys 0m0.242s 00:07:23.561 17:55:31 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.561 17:55:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.561 ************************************ 00:07:23.561 END TEST thread 00:07:23.561 ************************************ 00:07:23.819 17:55:31 -- common/autotest_common.sh@1142 -- # return 0 00:07:23.820 17:55:31 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:23.820 17:55:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.820 17:55:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.820 17:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:23.820 ************************************ 00:07:23.820 START TEST accel 00:07:23.820 ************************************ 00:07:23.820 17:55:31 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:23.820 * Looking for test storage... 00:07:23.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:23.820 17:55:31 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:23.820 17:55:31 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:23.820 17:55:31 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:23.820 17:55:31 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2226168 00:07:23.820 17:55:31 accel -- accel/accel.sh@63 -- # waitforlisten 2226168 00:07:23.820 17:55:31 accel -- common/autotest_common.sh@829 -- # '[' -z 2226168 ']' 00:07:23.820 17:55:31 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:23.820 17:55:31 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:23.820 17:55:31 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.820 17:55:31 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.820 17:55:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.820 17:55:31 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.820 17:55:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.820 17:55:31 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.820 17:55:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.820 17:55:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.820 17:55:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.820 17:55:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.820 17:55:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:23.820 17:55:31 accel -- accel/accel.sh@41 -- # jq -r . 00:07:23.820 [2024-07-23 17:55:31.373970] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:23.820 [2024-07-23 17:55:31.374046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226168 ] 00:07:23.820 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.820 [2024-07-23 17:55:31.435766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.078 [2024-07-23 17:55:31.527213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@862 -- # return 0 00:07:24.336 17:55:31 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:24.336 17:55:31 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:24.336 17:55:31 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:24.336 17:55:31 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:24.336 17:55:31 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:24.336 17:55:31 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.336 17:55:31 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.336 17:55:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.336 17:55:31 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.336 17:55:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.336 17:55:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.336 17:55:31 accel -- accel/accel.sh@75 -- # killprocess 2226168 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@948 -- # '[' -z 2226168 ']' 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@952 -- # kill -0 2226168 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@953 -- # uname 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2226168 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2226168' 00:07:24.336 killing process with pid 2226168 00:07:24.336 17:55:31 accel -- common/autotest_common.sh@967 -- # kill 2226168 00:07:24.336 
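The accel.sh@71-73 loop traced above fills an `expected_opcs` associative array by splitting each `opcode=module` pair with a one-shot `IFS==` before `read`. A minimal standalone sketch of that idiom, using illustrative sample entries rather than the real opcode list from the RPC output:

```shell
#!/usr/bin/env bash
# Sketch of the accel.sh splitting idiom seen in the trace above.
# The sample entries are illustrative, not the actual opcode list.
declare -A expected_opcs
exp_opcs=("copy=software" "crc32c=software" "compress=software")

for opc_opt in "${exp_opcs[@]}"; do
	# IFS== sets IFS to "=" for this read only, so each pair splits cleanly
	IFS== read -r opc module <<< "$opc_opt"
	expected_opcs["$opc"]=$module
done

echo "${expected_opcs[crc32c]}"   # prints "software"
```

The same `key=value` lines are what the `jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'` pipeline in the trace produces from the `accel_get_opc_assignments` JSON.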
17:55:31 accel -- common/autotest_common.sh@972 -- # wait 2226168 00:07:24.595 17:55:32 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:24.595 17:55:32 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:24.595 17:55:32 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.595 17:55:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.595 17:55:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.853 17:55:32 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:24.853 17:55:32 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:24.853 17:55:32 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.853 17:55:32 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:24.853 17:55:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.853 17:55:32 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:24.853 17:55:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:24.853 17:55:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.853 17:55:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.853 ************************************ 00:07:24.853 START TEST accel_missing_filename 00:07:24.853 ************************************ 00:07:24.853 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:24.853 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:24.853 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:24.853 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:24.853 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.853 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:24.853 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.853 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:24.853 17:55:32 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:24.853 17:55:32 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:24.853 17:55:32 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.853 17:55:32 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.853 17:55:32 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.853 17:55:32 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.853 17:55:32 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.853 17:55:32 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:24.853 17:55:32 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:24.853 [2024-07-23 17:55:32.341536] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:24.853 [2024-07-23 17:55:32.341600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226333 ] 00:07:24.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.853 [2024-07-23 17:55:32.399196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.853 [2024-07-23 17:55:32.483177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.111 [2024-07-23 17:55:32.540406] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.111 [2024-07-23 17:55:32.616693] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:25.111 A filename is required. 
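The `es` arithmetic traced after this failure (`es=234`, `(( es > 128 ))`, `es=106`, then `es=1`) is autotest's normalization of the failing accel_perf status before the final NOT check. A hedged reconstruction of that normalization as a standalone function; `normalize_es` is a made-up name, and the real logic lives in autotest_common.sh:

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the exit-status handling seen in the trace:
# statuses above 128 conventionally mean "terminated by signal", so 128 is
# subtracted, and any remaining non-zero status collapses to 1.
normalize_es() {
	local es=$1
	(( es > 128 )) && es=$(( es - 128 ))
	(( es != 0 )) && es=1
	echo "$es"
}

normalize_es 234   # 234 -> 106 -> 1
normalize_es 0     # success stays 0
```

This is why the trace's `(( !es == 0 ))` check at the end of each negative test only ever sees 0 or 1.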
00:07:25.111 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:25.111 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.111 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:25.111 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:25.111 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:25.111 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.111 00:07:25.111 real 0m0.368s 00:07:25.111 user 0m0.268s 00:07:25.111 sys 0m0.133s 00:07:25.111 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.111 17:55:32 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:25.111 ************************************ 00:07:25.111 END TEST accel_missing_filename 00:07:25.111 ************************************ 00:07:25.111 17:55:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.111 17:55:32 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.111 17:55:32 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:25.111 17:55:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.111 17:55:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.111 ************************************ 00:07:25.111 START TEST accel_compress_verify 00:07:25.111 ************************************ 00:07:25.111 17:55:32 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.111 17:55:32 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:25.111 17:55:32 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.111 17:55:32 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:25.111 17:55:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.111 17:55:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:25.111 17:55:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.111 17:55:32 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:25.111 17:55:32 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:25.111 [2024-07-23 17:55:32.758409] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:25.111 [2024-07-23 17:55:32.758471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226476 ] 00:07:25.369 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.369 [2024-07-23 17:55:32.815275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.369 [2024-07-23 17:55:32.898493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.369 [2024-07-23 17:55:32.953986] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.627 [2024-07-23 17:55:33.037816] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:25.627 00:07:25.627 Compression does not support the verify option, aborting. 00:07:25.628 17:55:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:25.628 17:55:33 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.628 17:55:33 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:25.628 17:55:33 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:25.628 17:55:33 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:25.628 17:55:33 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.628 00:07:25.628 real 0m0.376s 00:07:25.628 user 0m0.278s 00:07:25.628 sys 0m0.133s 00:07:25.628 17:55:33 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.628 17:55:33 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:25.628 ************************************ 00:07:25.628 END TEST accel_compress_verify 00:07:25.628 ************************************ 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.628 17:55:33 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.628 ************************************ 00:07:25.628 START TEST accel_wrong_workload 00:07:25.628 ************************************ 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:25.628 17:55:33 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:25.628 Unsupported workload type: foobar 00:07:25.628 [2024-07-23 17:55:33.179888] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:25.628 accel_perf options: 00:07:25.628 [-h help message] 00:07:25.628 [-q queue depth per core] 00:07:25.628 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:25.628 [-T number of threads per core 00:07:25.628 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:25.628 [-t time in seconds] 00:07:25.628 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:25.628 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:25.628 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:25.628 [-l for compress/decompress workloads, name of uncompressed input file 00:07:25.628 [-S for crc32c workload, use this seed value (default 0) 00:07:25.628 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:25.628 [-f for fill workload, use this BYTE value (default 255) 00:07:25.628 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:25.628 [-y verify result if this switch is on] 00:07:25.628 [-a tasks to allocate per core (default: same value as -q)] 00:07:25.628 Can be used to spread operations across a wider range of memory. 
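accel_perf itself is a C binary, but the rejection above ("Unsupported workload type: foobar") reduces to a membership test against the workload list printed in the help text. An illustrative shell re-creation of that check; the function and variable names are assumptions for the sketch, not the actual accel_perf source:

```shell
#!/usr/bin/env bash
# Illustrative membership check mirroring accel_perf's -w validation.
# The list comes from the help text in the log; the function is not real code.
valid_workloads="copy fill crc32c copy_crc32c compare compress decompress dualcast xor"

check_workload() {
	case " $valid_workloads " in
		*" $1 "*) return 0 ;;
		*) echo "Unsupported workload type: $1" >&2
		   return 1 ;;
	esac
}

check_workload crc32c && echo "crc32c accepted"
check_workload foobar 2>/dev/null || echo "foobar rejected"
```

A non-zero return here is what the surrounding NOT wrapper expects, which is why `accel_wrong_workload` passes when the workload is refused.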
00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.628 00:07:25.628 real 0m0.023s 00:07:25.628 user 0m0.012s 00:07:25.628 sys 0m0.011s 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.628 17:55:33 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:25.628 ************************************ 00:07:25.628 END TEST accel_wrong_workload 00:07:25.628 ************************************ 00:07:25.628 Error: writing output failed: Broken pipe 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.628 17:55:33 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.628 ************************************ 00:07:25.628 START TEST accel_negative_buffers 00:07:25.628 ************************************ 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:25.628 17:55:33 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:25.628 17:55:33 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:25.628 -x option must be non-negative. 00:07:25.628 [2024-07-23 17:55:33.253431] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:25.628 accel_perf options: 00:07:25.628 [-h help message] 00:07:25.628 [-q queue depth per core] 00:07:25.628 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:25.628 [-T number of threads per core 00:07:25.628 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:25.628 [-t time in seconds] 00:07:25.628 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:25.628 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:25.628 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:25.628 [-l for compress/decompress workloads, name of uncompressed input file 00:07:25.628 [-S for crc32c workload, use this seed value (default 0) 00:07:25.628 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:25.628 [-f for fill workload, use this BYTE value (default 255) 00:07:25.628 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:25.628 [-y verify result if this switch is on] 00:07:25.628 [-a tasks to allocate per core (default: same value as -q)] 00:07:25.628 Can be used to spread operations across a wider range of memory. 
00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.628 00:07:25.628 real 0m0.024s 00:07:25.628 user 0m0.015s 00:07:25.628 sys 0m0.009s 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.628 17:55:33 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:25.628 ************************************ 00:07:25.628 END TEST accel_negative_buffers 00:07:25.628 ************************************ 00:07:25.628 Error: writing output failed: Broken pipe 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.628 17:55:33 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.628 17:55:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.885 ************************************ 00:07:25.885 START TEST accel_crc32c 00:07:25.885 ************************************ 00:07:25.885 17:55:33 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:25.885 [2024-07-23 17:55:33.322386] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:25.885 [2024-07-23 17:55:33.322450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226542 ] 00:07:25.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.885 [2024-07-23 17:55:33.379307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.885 [2024-07-23 17:55:33.462502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 
17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.885 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.886 17:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.256 17:55:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.256 17:55:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.256 17:55:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.256 17:55:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.256 17:55:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.256 17:55:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:27.256 17:55:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.256 00:07:27.256 real 0m1.370s 00:07:27.256 user 0m1.237s 00:07:27.256 sys 0m0.136s 00:07:27.256 17:55:34 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.256 17:55:34 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:27.256 ************************************ 00:07:27.256 END TEST accel_crc32c 00:07:27.256 ************************************ 00:07:27.256 17:55:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.256 17:55:34 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:27.256 17:55:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:27.256 17:55:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.256 17:55:34 accel -- common/autotest_common.sh@10 -- # set +x 
00:07:27.257 ************************************
00:07:27.257 START TEST accel_crc32c_C2
00:07:27.257 ************************************
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:07:27.257 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:07:27.257 [2024-07-23 17:55:34.734829] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:07:27.257 [2024-07-23 17:55:34.734883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226699 ]
00:07:27.257 EAL: No free 2048 kB hugepages reported on node 1
00:07:27.257 [2024-07-23 17:55:34.790658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.257 [2024-07-23 17:55:34.877614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace option-parse loop, distinct values below; interleaved case "$var" in / IFS=: / read -r var val entries omitted]
00:07:27.515 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:27.515 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:07:27.515 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:07:27.515 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:27.515 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:27.515 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:27.515 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:27.516 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:27.516 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:27.516 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:27.516 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:27.516 17:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:28.449 17:55:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:28.449 17:55:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:28.449 17:55:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:28.449
00:07:28.449 real	0m1.366s
00:07:28.449 user	0m1.233s
00:07:28.449 sys	0m0.135s
00:07:28.449 17:55:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:28.449 17:55:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:28.449 ************************************
00:07:28.449 END TEST accel_crc32c_C2
00:07:28.449 ************************************
00:07:28.707 17:55:36 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:28.707 17:55:36 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:28.707 17:55:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:28.707 17:55:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:28.707 17:55:36 accel -- common/autotest_common.sh@10 -- # set +x
00:07:28.707 ************************************
00:07:28.707 START TEST accel_copy
00:07:28.707 ************************************
00:07:28.707 17:55:36 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
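The `IFS=:` / `read -r var val` / `case "$var" in` entries that dominate this trace come from accel.sh's loop reading colon-separated `name:value` pairs that describe the workload under test. A minimal standalone sketch of that parsing pattern (function and key names here are illustrative, not copied from accel.sh):

```shell
#!/usr/bin/env bash
# Sketch of the colon-separated key/value loop whose xtrace output
# fills this log; function and key names are illustrative.
parse_opts() {
  local var val opc module
  while IFS=: read -r var val; do   # split each line at the first ':'
    case "$var" in
      opc)    opc=$val ;;           # e.g. crc32c, copy, fill
      module) module=$val ;;        # e.g. software
      *)      : ;;                  # ignore unknown keys
    esac
  done
  echo "opc=$opc module=$module"
}
printf 'opc:crc32c\nmodule:software\n' | parse_opts
```

Each iteration of the `while` loop emits one `read`, one `case`, and one assignment under `set -x`, which is exactly the repetition seen above.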
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:07:28.707 [2024-07-23 17:55:36.153510] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:07:28.707 [2024-07-23 17:55:36.153573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226973 ]
00:07:28.707 EAL: No free 2048 kB hugepages reported on node 1
00:07:28.707 [2024-07-23 17:55:36.211659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.707 [2024-07-23 17:55:36.301424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace option-parse loop, distinct values below; interleaved case "$var" in / IFS=: / read -r var val entries omitted]
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:07:28.707 17:55:36 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:28.708 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:28.708 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:28.708 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:07:28.708 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:28.708 17:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:07:30.080 17:55:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:30.080 17:55:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:30.080 17:55:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:30.080
00:07:30.080 real	0m1.371s
00:07:30.080 user	0m1.238s
00:07:30.080 sys	0m0.134s
00:07:30.080 17:55:37 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:30.080 17:55:37 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:07:30.080 ************************************
00:07:30.080 END TEST accel_copy
00:07:30.080 ************************************
00:07:30.080 17:55:37 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:30.080 17:55:37 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:30.080 17:55:37 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:30.080 17:55:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:30.080 17:55:37 accel -- common/autotest_common.sh@10 -- # set +x
00:07:30.080 ************************************
00:07:30.080 START TEST accel_fill
00:07:30.080 ************************************
00:07:30.080 17:55:37 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:07:30.080 17:55:37 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:07:30.080 [2024-07-23 17:55:37.569386] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
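Each test section in this log follows the same shape: a START banner, the timed workload (whose `real`/`user`/`sys` lines appear in the trace), and an END banner. A rough standalone sketch of that `run_test` wrapper, inferred from the trace rather than copied from common/autotest_common.sh:

```shell
#!/usr/bin/env bash
# Rough sketch of the run_test banner/timing pattern seen in this log;
# inferred from the trace, not copied from common/autotest_common.sh.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"            # the bash keyword prints real/user/sys to stderr
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

# Example: time a trivial workload the same way the harness does.
run_test demo_sleep sleep 0.1
```

The banners go to stdout while `time` reports to stderr, which matches how the timing lines interleave with the banners in the captured console output.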
00:07:30.080 [2024-07-23 17:55:37.569444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227130 ]
00:07:30.080 EAL: No free 2048 kB hugepages reported on node 1
00:07:30.080 [2024-07-23 17:55:37.625369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.080 [2024-07-23 17:55:37.708044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace option-parse loop, distinct values below; interleaved case "$var" in / IFS=: / read -r var val entries omitted]
00:07:30.338 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:07:30.338 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:07:30.338 17:55:37 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:07:30.338 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:07:30.338 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:30.339 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:07:30.339 17:55:37 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:07:30.339 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:30.339 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:30.339 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:07:30.339 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:07:30.339 17:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:07:31.273 17:55:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:31.273 17:55:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:31.273 17:55:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:31.273
00:07:31.273 real	0m1.373s
00:07:31.273 user	0m1.243s
00:07:31.273 sys	0m0.132s
00:07:31.273 17:55:38 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:31.273 17:55:38 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:07:31.273 ************************************
00:07:31.273 END TEST accel_fill
00:07:31.273 ************************************
00:07:31.531 17:55:38 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:31.531 17:55:38 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:31.531 17:55:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:31.531 17:55:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:31.531 17:55:38 accel -- common/autotest_common.sh@10 -- # set +x
00:07:31.531 ************************************
00:07:31.531 START TEST accel_copy_crc32c
00:07:31.531 ************************************
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:31.532 17:55:38 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:31.532 [2024-07-23 17:55:38.996079] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:07:31.532 [2024-07-23 17:55:38.996143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227287 ]
00:07:31.532 EAL: No free 2048 kB hugepages reported on node 1
00:07:31.532 [2024-07-23 17:55:39.053590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.532 [2024-07-23 17:55:39.134203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace option-parse loop, distinct values below; interleaved case "$var" in / IFS=: / read -r var val entries omitted]
00:07:31.532 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:31.532 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:31.532 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:31.532 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:07:31.532 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:31.532 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096
bytes' 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.789 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:31.790 17:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.723 00:07:32.723 real 0m1.371s 00:07:32.723 user 0m1.237s 00:07:32.723 sys 0m0.135s 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.723 17:55:40 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:32.723 ************************************ 00:07:32.723 END TEST accel_copy_crc32c 
00:07:32.723 ************************************ 00:07:32.723 17:55:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.723 17:55:40 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:32.723 17:55:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:32.723 17:55:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.723 17:55:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.981 ************************************ 00:07:32.981 START TEST accel_copy_crc32c_C2 00:07:32.981 ************************************ 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.981 17:55:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:32.981 [2024-07-23 17:55:40.414390] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:32.981 [2024-07-23 17:55:40.414455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227444 ] 00:07:32.981 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.981 [2024-07-23 17:55:40.473908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.981 [2024-07-23 17:55:40.557838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:32.981 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.982 
17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:32.982 17:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.354 17:55:41 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.354 00:07:34.354 real 0m1.370s 00:07:34.354 user 0m1.236s 00:07:34.354 sys 0m0.136s 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.354 17:55:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:34.354 ************************************ 00:07:34.354 
END TEST accel_copy_crc32c_C2 00:07:34.354 ************************************ 00:07:34.354 17:55:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.354 17:55:41 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:34.354 17:55:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:34.354 17:55:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.354 17:55:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.354 ************************************ 00:07:34.354 START TEST accel_dualcast 00:07:34.354 ************************************ 00:07:34.354 17:55:41 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.354 17:55:41 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:34.354 17:55:41 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:07:34.354 [2024-07-23 17:55:41.834146] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:34.354 [2024-07-23 17:55:41.834211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227711 ] 00:07:34.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.354 [2024-07-23 17:55:41.892016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.354 [2024-07-23 17:55:41.972518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.613 17:55:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.613 17:55:42 accel.accel_dualcast 
00:07:35.547 17:55:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:07:35.547 17:55:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:07:35.547 17:55:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:07:35.547 17:55:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:07:35.547 17:55:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:35.547 17:55:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:35.547 17:55:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:35.547 real 0m1.356s
00:07:35.547 user 0m1.227s
00:07:35.547 sys 0m0.131s
00:07:35.547 ************************************
00:07:35.547 END TEST accel_dualcast
00:07:35.547 ************************************
00:07:35.547 17:55:43 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:35.547 17:55:43 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:35.805 ************************************
00:07:35.805 START TEST accel_compare
00:07:35.805 ************************************
00:07:35.805 17:55:43 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:07:35.805 17:55:43 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:35.805 17:55:43 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:35.805 17:55:43 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:07:35.805 [2024-07-23 17:55:43.237944] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:07:35.805 [2024-07-23 17:55:43.238015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227873 ]
00:07:35.805 EAL: No free 2048 kB hugepages reported on node 1
00:07:35.805 [2024-07-23 17:55:43.295385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.805 [2024-07-23 17:55:43.379562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.805 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:07:35.805 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:07:35.805 17:55:43 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:07:35.805 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:35.806 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:07:35.806 17:55:43 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:07:35.806 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:35.806 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:35.806 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:07:35.806 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:07:35.806 17:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:07:37.177 17:55:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:37.177 17:55:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:37.177 17:55:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:37.177 real 0m1.379s
00:07:37.177 user 0m1.250s
00:07:37.177 sys 0m0.130s
00:07:37.177 ************************************
00:07:37.177 END TEST accel_compare
00:07:37.177 ************************************
00:07:37.177 17:55:44 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:37.177 17:55:44 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:37.177 ************************************
00:07:37.177 START TEST accel_xor
00:07:37.177 ************************************
00:07:37.177 17:55:44 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:07:37.177 17:55:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:37.177 17:55:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:37.177 17:55:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:37.177 [2024-07-23 17:55:44.659921] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:07:37.177 [2024-07-23 17:55:44.659987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228026 ]
00:07:37.177 EAL: No free 2048 kB hugepages reported on node 1
00:07:37.177 [2024-07-23 17:55:44.717869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:37.177 [2024-07-23 17:55:44.801122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:37.436 17:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:38.369 17:55:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:38.369 17:55:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:38.369 17:55:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:38.369 real 0m1.381s
00:07:38.369 user 0m1.246s
00:07:38.369 sys 0m0.137s
00:07:38.369 ************************************
00:07:38.369 END TEST accel_xor
00:07:38.369 ************************************
00:07:38.628 17:55:46 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:38.628 17:55:46 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:38.628 ************************************
00:07:38.628 START TEST accel_xor
00:07:38.628 ************************************
00:07:38.628 17:55:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:38.628 [2024-07-23 17:55:46.087543] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:07:38.628 [2024-07-23 17:55:46.087609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228185 ]
00:07:38.628 EAL: No free 2048 kB hugepages reported on node 1
00:07:38.628 [2024-07-23 17:55:46.145039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.628 [2024-07-23 17:55:46.227387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:38.628 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:38.629 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:38.629 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:38.886 17:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:39.820 17:55:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:39.820 17:55:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:39.820 17:55:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:39.820 real 0m1.371s
00:07:39.820 user 0m1.241s
00:07:39.820 sys 0m0.132s
00:07:39.820 ************************************
00:07:39.820 END TEST accel_xor
00:07:39.820 ************************************
00:07:39.820 17:55:47 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:39.820 17:55:47 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:40.079 ************************************
00:07:40.079 START TEST accel_dif_verify
00:07:40.079 ************************************
00:07:40.079 17:55:47 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:40.079 [2024-07-23 17:55:47.503808] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:07:40.079 [2024-07-23 17:55:47.503882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228439 ]
00:07:40.079 EAL: No free 2048 kB hugepages reported on node 1
00:07:40.079 [2024-07-23 17:55:47.561557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.079 [2024-07-23 17:55:47.644200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.079 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.080 17:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.453 17:55:48 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:41.453 17:55:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:07:41.453 00:07:41.453 real 0m1.372s 00:07:41.453 user 0m1.255s 00:07:41.453 sys 0m0.120s 00:07:41.453 17:55:48 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.453 17:55:48 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:41.453 ************************************ 00:07:41.453 END TEST accel_dif_verify 00:07:41.453 ************************************ 00:07:41.453 17:55:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.453 17:55:48 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:41.453 17:55:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:41.453 17:55:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.453 17:55:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.453 ************************************ 00:07:41.454 START TEST accel_dif_generate 00:07:41.454 ************************************ 00:07:41.454 17:55:48 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.454 17:55:48 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:41.454 17:55:48 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:41.454 [2024-07-23 17:55:48.925922] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:41.454 [2024-07-23 17:55:48.925990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228612 ] 00:07:41.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.454 [2024-07-23 17:55:48.984980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.454 [2024-07-23 17:55:49.067609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.711 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.712 17:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:07:42.715 17:55:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.715 00:07:42.715 real 0m1.382s 00:07:42.715 user 0m1.251s 00:07:42.715 sys 0m0.135s 00:07:42.715 17:55:50 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.715 17:55:50 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:42.715 ************************************ 00:07:42.715 END TEST accel_dif_generate 00:07:42.715 ************************************ 00:07:42.715 17:55:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.715 17:55:50 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:42.715 17:55:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:42.715 17:55:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.715 17:55:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.715 ************************************ 00:07:42.715 START TEST accel_dif_generate_copy 00:07:42.715 ************************************ 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:42.715 17:55:50 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:42.715 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:42.715 [2024-07-23 17:55:50.356441] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:42.715 [2024-07-23 17:55:50.356507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228773 ] 00:07:42.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.973 [2024-07-23 17:55:50.413856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.973 [2024-07-23 17:55:50.497724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.973 17:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.343 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.344 17:55:51 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.344 00:07:44.344 real 0m1.381s 00:07:44.344 user 0m1.245s 00:07:44.344 sys 0m0.138s 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.344 17:55:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:44.344 ************************************ 00:07:44.344 END TEST accel_dif_generate_copy 00:07:44.344 ************************************ 00:07:44.344 17:55:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.344 17:55:51 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:44.344 17:55:51 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.344 17:55:51 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:44.344 17:55:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.344 17:55:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.344 ************************************ 00:07:44.344 START TEST accel_comp 00:07:44.344 ************************************ 00:07:44.344 17:55:51 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:44.344 [2024-07-23 17:55:51.782666] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:44.344 [2024-07-23 17:55:51.782733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228926 ] 00:07:44.344 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.344 [2024-07-23 17:55:51.839468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.344 [2024-07-23 17:55:51.922339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.344 17:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.714 17:55:53 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:45.714 17:55:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.714 00:07:45.714 real 0m1.377s 00:07:45.714 user 0m1.251s 00:07:45.714 sys 0m0.129s 00:07:45.714 17:55:53 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.714 17:55:53 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:45.714 ************************************ 00:07:45.714 END TEST accel_comp 00:07:45.714 ************************************ 00:07:45.715 17:55:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.715 17:55:53 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.715 17:55:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:45.715 17:55:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.715 17:55:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.715 ************************************ 00:07:45.715 START TEST accel_decomp 00:07:45.715 ************************************ 00:07:45.715 17:55:53 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:45.715 17:55:53 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:07:45.715 [2024-07-23 17:55:53.203048] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:45.715 [2024-07-23 17:55:53.203111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229186 ] 00:07:45.715 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.715 [2024-07-23 17:55:53.260671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.715 [2024-07-23 17:55:53.343305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 
17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.972 17:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.903 17:55:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.161 17:55:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.161 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.161 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.162 17:55:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.162 00:07:47.162 real 0m1.380s 00:07:47.162 user 0m1.244s 00:07:47.162 sys 0m0.139s 00:07:47.162 17:55:54 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.162 17:55:54 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:47.162 ************************************ 00:07:47.162 END TEST accel_decomp 00:07:47.162 ************************************ 00:07:47.162 17:55:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.162 17:55:54 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:47.162 17:55:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:47.162 17:55:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.162 17:55:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.162 ************************************ 00:07:47.162 START TEST accel_decomp_full 00:07:47.162 ************************************ 00:07:47.162 17:55:54 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:47.162 
17:55:54 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:47.162 17:55:54 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:47.162 [2024-07-23 17:55:54.629178] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:47.162 [2024-07-23 17:55:54.629241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229358 ] 00:07:47.162 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.162 [2024-07-23 17:55:54.685377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.162 [2024-07-23 17:55:54.767808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.420 17:55:54 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:47.420 17:55:54 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.420 17:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.353 17:55:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.353 17:55:56 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.353 00:07:48.353 real 0m1.389s 00:07:48.353 user 0m1.255s 00:07:48.353 sys 0m0.136s 00:07:48.353 17:55:56 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.353 17:55:56 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:48.353 ************************************ 00:07:48.353 END TEST accel_decomp_full 00:07:48.353 ************************************ 00:07:48.611 17:55:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.611 17:55:56 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.611 17:55:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:48.611 17:55:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.611 17:55:56 accel 
-- common/autotest_common.sh@10 -- # set +x 00:07:48.611 ************************************ 00:07:48.611 START TEST accel_decomp_mcore 00:07:48.611 ************************************ 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:48.611 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:48.611 [2024-07-23 17:55:56.064977] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:48.611 [2024-07-23 17:55:56.065047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229518 ] 00:07:48.611 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.611 [2024-07-23 17:55:56.122703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.611 [2024-07-23 17:55:56.208655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.611 [2024-07-23 17:55:56.208699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.611 [2024-07-23 17:55:56.208757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.611 [2024-07-23 17:55:56.208760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:48.869 17:55:56 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.869 17:55:56 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.869 17:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 17:55:57 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.803 
17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.803 00:07:49.803 real 0m1.397s 00:07:49.803 user 0m4.713s 00:07:49.803 sys 0m0.142s 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.803 17:55:57 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:49.803 ************************************ 00:07:49.803 END TEST accel_decomp_mcore 00:07:49.803 ************************************ 00:07:50.061 17:55:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.062 17:55:57 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:50.062 17:55:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:50.062 17:55:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.062 17:55:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.062 ************************************ 00:07:50.062 START TEST accel_decomp_full_mcore 00:07:50.062 ************************************ 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:50.062 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:50.062 [2024-07-23 17:55:57.512663] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:50.062 [2024-07-23 17:55:57.512729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229672 ] 00:07:50.062 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.062 [2024-07-23 17:55:57.571530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.062 [2024-07-23 17:55:57.668551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.062 [2024-07-23 17:55:57.668619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.062 [2024-07-23 17:55:57.668677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.062 [2024-07-23 17:55:57.668681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.320 17:55:57 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.320 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.321 17:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 
17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.253 00:07:51.253 real 0m1.410s 00:07:51.253 user 0m4.714s 00:07:51.253 sys 0m0.160s 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.253 17:55:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:51.253 ************************************ 00:07:51.253 END TEST accel_decomp_full_mcore 00:07:51.253 ************************************ 00:07:51.512 17:55:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.512 17:55:58 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.512 17:55:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:51.512 17:55:58 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:51.512 17:55:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.512 ************************************ 00:07:51.512 START TEST accel_decomp_mthread 00:07:51.512 ************************************ 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:51.512 17:55:58 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:07:51.512 [2024-07-23 17:55:58.971806] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:51.512 [2024-07-23 17:55:58.971874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229948 ] 00:07:51.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.512 [2024-07-23 17:55:59.028092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.512 [2024-07-23 17:55:59.110488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 17:55:59 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:51.513 
17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.513 17:55:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.886 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.887 00:07:52.887 real 0m1.379s 00:07:52.887 user 0m1.253s 00:07:52.887 sys 0m0.129s 00:07:52.887 17:56:00 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.887 17:56:00 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:52.887 ************************************ 00:07:52.887 END TEST accel_decomp_mthread 00:07:52.887 ************************************ 00:07:52.887 17:56:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.887 17:56:00 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.887 17:56:00 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:52.887 17:56:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.887 17:56:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.887 ************************************ 00:07:52.887 START TEST accel_decomp_full_mthread 00:07:52.887 ************************************ 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:52.887 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:52.887 [2024-07-23 17:56:00.398911] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:07:52.887 [2024-07-23 17:56:00.398975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230110 ] 00:07:52.887 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.887 [2024-07-23 17:56:00.457054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.887 [2024-07-23 17:56:00.539758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.145 17:56:00 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.145 17:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.515 00:07:54.515 real 0m1.409s 00:07:54.515 user 0m1.276s 00:07:54.515 sys 0m0.137s 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.515 17:56:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:54.515 ************************************ 00:07:54.515 END TEST accel_decomp_full_mthread 00:07:54.515 ************************************ 00:07:54.515 17:56:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.515 17:56:01 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:54.515 17:56:01 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:54.515 
17:56:01 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:54.515 17:56:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.515 17:56:01 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:54.515 17:56:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.515 17:56:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.515 17:56:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.515 17:56:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.515 17:56:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.515 17:56:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.515 17:56:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:54.515 17:56:01 accel -- accel/accel.sh@41 -- # jq -r . 00:07:54.515 ************************************ 00:07:54.515 START TEST accel_dif_functional_tests 00:07:54.515 ************************************ 00:07:54.515 17:56:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:54.515 [2024-07-23 17:56:01.871016] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:54.515 [2024-07-23 17:56:01.871078] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230264 ] 00:07:54.515 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.515 [2024-07-23 17:56:01.926444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.515 [2024-07-23 17:56:02.011266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.515 [2024-07-23 17:56:02.011458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.515 [2024-07-23 17:56:02.011474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.515 00:07:54.515 00:07:54.515 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.515 http://cunit.sourceforge.net/ 00:07:54.515 00:07:54.515 00:07:54.515 Suite: accel_dif 00:07:54.515 Test: verify: DIF generated, GUARD check ...passed 00:07:54.515 Test: verify: DIF generated, APPTAG check ...passed 00:07:54.515 Test: verify: DIF generated, REFTAG check ...passed 00:07:54.515 Test: verify: DIF not generated, GUARD check ...[2024-07-23 17:56:02.101183] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:54.515 passed 00:07:54.515 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 17:56:02.101262] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:54.515 passed 00:07:54.515 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 17:56:02.101295] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:54.515 passed 00:07:54.515 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:54.515 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 17:56:02.101397] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:07:54.515 passed 00:07:54.515 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:54.515 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:54.515 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:54.515 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 17:56:02.101538] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:54.515 passed 00:07:54.515 Test: verify copy: DIF generated, GUARD check ...passed 00:07:54.515 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:54.515 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:54.515 Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 17:56:02.101715] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:54.515 passed 00:07:54.515 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 17:56:02.101755] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:54.515 passed 00:07:54.516 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 17:56:02.101790] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:54.516 passed 00:07:54.516 Test: generate copy: DIF generated, GUARD check ...passed 00:07:54.516 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:54.516 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:54.516 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:54.516 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:54.516 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:54.516 Test: generate copy: iovecs-len validate ...[2024-07-23 17:56:02.102053] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:54.516 passed 00:07:54.516 Test: generate copy: buffer alignment validate ...passed 00:07:54.516 00:07:54.516 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.516 suites 1 1 n/a 0 0 00:07:54.516 tests 26 26 26 0 0 00:07:54.516 asserts 115 115 115 0 n/a 00:07:54.516 00:07:54.516 Elapsed time = 0.003 seconds 00:07:54.774 00:07:54.774 real 0m0.464s 00:07:54.774 user 0m0.722s 00:07:54.774 sys 0m0.175s 00:07:54.774 17:56:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.774 17:56:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:54.774 ************************************ 00:07:54.774 END TEST accel_dif_functional_tests 00:07:54.774 ************************************ 00:07:54.774 17:56:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.774 00:07:54.774 real 0m31.053s 00:07:54.774 user 0m34.709s 00:07:54.774 sys 0m4.319s 00:07:54.774 17:56:02 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.774 17:56:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.774 ************************************ 00:07:54.774 END TEST accel 00:07:54.774 ************************************ 00:07:54.774 17:56:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.774 17:56:02 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:54.774 17:56:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.774 17:56:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.774 17:56:02 -- common/autotest_common.sh@10 -- # set +x 00:07:54.774 ************************************ 00:07:54.774 START TEST accel_rpc 00:07:54.774 ************************************ 00:07:54.774 17:56:02 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:54.774 * Looking for test storage... 
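The bulk of the trace above is bash xtrace output from accel.sh's settings loop: the test reads colon-separated `key:value` pairs with `IFS=:` and `read -r var val`, then dispatches each key through a `case` statement (producing the repeated `accel.sh@19`/`accel.sh@20`/`accel.sh@21` lines). The sketch below is a hedged, simplified reconstruction of that pattern for readers unfamiliar with it; the function and key names here are illustrative, not the exact accel.sh source.

```shell
#!/usr/bin/env bash
# Hypothetical simplification of the accel.sh parsing pattern seen in
# the trace: consume "key:value" lines and record the operation and
# module, mirroring the accel_opc=decompress / accel_module=software
# assignments visible above.
parse_settings() {
  local accel_opc='' accel_module=''
  while IFS=: read -r var val; do
    case "$var" in
      opc)    accel_opc=$val ;;     # e.g. decompress
      module) accel_module=$val ;;  # e.g. software
      *)      : ;;                  # queue depth, block size, etc. ignored here
    esac
  done
  echo "$accel_opc $accel_module"
}

printf 'opc:decompress\nmodule:software\n' | parse_settings
```

Each `val=` line in the log corresponds to one iteration of a loop like this, which is why a single accel_perf invocation produces so many near-identical trace lines.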
00:07:54.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:54.774 17:56:02 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.774 17:56:02 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2230452 00:07:54.774 17:56:02 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:54.774 17:56:02 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2230452 00:07:54.774 17:56:02 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2230452 ']' 00:07:54.774 17:56:02 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.774 17:56:02 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.774 17:56:02 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.774 17:56:02 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.774 17:56:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.032 [2024-07-23 17:56:02.474803] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:55.032 [2024-07-23 17:56:02.474898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230452 ] 00:07:55.032 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.032 [2024-07-23 17:56:02.531713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.032 [2024-07-23 17:56:02.620568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.032 17:56:02 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.032 17:56:02 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:55.032 17:56:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:55.032 17:56:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:55.032 17:56:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:55.032 17:56:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:55.032 17:56:02 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:55.033 17:56:02 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.033 17:56:02 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.033 17:56:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.291 ************************************ 00:07:55.291 START TEST accel_assign_opcode 00:07:55.291 ************************************ 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:07:55.291 [2024-07-23 17:56:02.709236] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:55.291 [2024-07-23 17:56:02.717248] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:55.291 17:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:55.549 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.549 software 00:07:55.549 00:07:55.549 real 0m0.275s 00:07:55.549 user 0m0.040s 00:07:55.549 sys 0m0.007s 00:07:55.549 17:56:02 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.549 17:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:55.549 ************************************ 00:07:55.549 END TEST accel_assign_opcode 00:07:55.549 ************************************ 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:55.549 17:56:03 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2230452 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2230452 ']' 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2230452 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2230452 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2230452' 00:07:55.549 killing process with pid 2230452 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@967 -- # kill 2230452 00:07:55.549 17:56:03 accel_rpc -- common/autotest_common.sh@972 -- # wait 2230452 00:07:55.809 00:07:55.809 real 0m1.045s 00:07:55.809 user 0m0.996s 00:07:55.809 sys 0m0.412s 00:07:55.809 17:56:03 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.809 17:56:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.809 ************************************ 00:07:55.809 END TEST accel_rpc 00:07:55.809 ************************************ 00:07:55.809 17:56:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:55.809 17:56:03 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:55.809 17:56:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.809 17:56:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.809 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.809 ************************************ 00:07:55.809 START TEST app_cmdline 00:07:55.809 ************************************ 00:07:55.809 17:56:03 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:56.067 * Looking for test storage... 00:07:56.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:56.067 17:56:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:56.067 17:56:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2230656 00:07:56.067 17:56:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:56.067 17:56:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2230656 00:07:56.067 17:56:03 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2230656 ']' 00:07:56.067 17:56:03 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.067 17:56:03 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.067 17:56:03 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.067 17:56:03 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.067 17:56:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.067 [2024-07-23 17:56:03.569003] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:07:56.067 [2024-07-23 17:56:03.569100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230656 ] 00:07:56.067 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.067 [2024-07-23 17:56:03.625895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.067 [2024-07-23 17:56:03.710213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.324 17:56:03 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.324 17:56:03 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:56.324 17:56:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:56.582 { 00:07:56.582 "version": "SPDK v24.09-pre git sha1 b8378f94e", 00:07:56.582 "fields": { 00:07:56.582 "major": 24, 00:07:56.582 "minor": 9, 00:07:56.582 "patch": 0, 00:07:56.582 "suffix": "-pre", 00:07:56.582 "commit": "b8378f94e" 00:07:56.582 } 00:07:56.582 } 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:56.582 17:56:04 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:56.582 17:56:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.582 17:56:04 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.840 request: 00:07:56.840 { 00:07:56.840 "method": "env_dpdk_get_mem_stats", 00:07:56.840 "req_id": 1 
00:07:56.840 } 00:07:56.840 Got JSON-RPC error response 00:07:56.840 response: 00:07:56.840 { 00:07:56.840 "code": -32601, 00:07:56.840 "message": "Method not found" 00:07:56.840 } 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.840 17:56:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2230656 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2230656 ']' 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2230656 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.840 17:56:04 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2230656 00:07:57.097 17:56:04 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.097 17:56:04 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.097 17:56:04 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2230656' 00:07:57.097 killing process with pid 2230656 00:07:57.097 17:56:04 app_cmdline -- common/autotest_common.sh@967 -- # kill 2230656 00:07:57.097 17:56:04 app_cmdline -- common/autotest_common.sh@972 -- # wait 2230656 00:07:57.355 00:07:57.355 real 0m1.442s 00:07:57.355 user 0m1.792s 00:07:57.355 sys 0m0.440s 00:07:57.355 17:56:04 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.355 17:56:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:57.355 ************************************ 00:07:57.355 END TEST app_cmdline 00:07:57.355 ************************************ 00:07:57.355 17:56:04 -- 
common/autotest_common.sh@1142 -- # return 0 00:07:57.355 17:56:04 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:57.355 17:56:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.355 17:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.355 17:56:04 -- common/autotest_common.sh@10 -- # set +x 00:07:57.355 ************************************ 00:07:57.355 START TEST version 00:07:57.355 ************************************ 00:07:57.355 17:56:04 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:57.355 * Looking for test storage... 00:07:57.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:57.355 17:56:05 version -- app/version.sh@17 -- # get_header_version major 00:07:57.614 17:56:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.614 17:56:05 version -- app/version.sh@14 -- # cut -f2 00:07:57.614 17:56:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:57.614 17:56:05 version -- app/version.sh@17 -- # major=24 00:07:57.614 17:56:05 version -- app/version.sh@18 -- # get_header_version minor 00:07:57.614 17:56:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.614 17:56:05 version -- app/version.sh@14 -- # cut -f2 00:07:57.614 17:56:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:57.614 17:56:05 version -- app/version.sh@18 -- # minor=9 00:07:57.614 17:56:05 version -- app/version.sh@19 -- # get_header_version patch 00:07:57.614 17:56:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.614 
17:56:05 version -- app/version.sh@14 -- # cut -f2 00:07:57.614 17:56:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:57.614 17:56:05 version -- app/version.sh@19 -- # patch=0 00:07:57.614 17:56:05 version -- app/version.sh@20 -- # get_header_version suffix 00:07:57.614 17:56:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:57.614 17:56:05 version -- app/version.sh@14 -- # cut -f2 00:07:57.614 17:56:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:57.614 17:56:05 version -- app/version.sh@20 -- # suffix=-pre 00:07:57.614 17:56:05 version -- app/version.sh@22 -- # version=24.9 00:07:57.614 17:56:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:57.614 17:56:05 version -- app/version.sh@28 -- # version=24.9rc0 00:07:57.614 17:56:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:57.614 17:56:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:57.614 17:56:05 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:57.614 17:56:05 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:57.614 00:07:57.614 real 0m0.105s 00:07:57.614 user 0m0.062s 00:07:57.614 sys 0m0.063s 00:07:57.614 17:56:05 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.614 17:56:05 version -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 ************************************ 00:07:57.614 END TEST version 00:07:57.614 ************************************ 00:07:57.614 17:56:05 -- common/autotest_common.sh@1142 -- # return 0 00:07:57.614 17:56:05 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:07:57.614 17:56:05 -- spdk/autotest.sh@198 -- # uname -s 00:07:57.614 17:56:05 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:57.614 17:56:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:57.614 17:56:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:57.614 17:56:05 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:57.614 17:56:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:57.614 17:56:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:57.614 17:56:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.614 17:56:05 -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 17:56:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:57.614 17:56:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:57.614 17:56:05 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:57.614 17:56:05 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:57.614 17:56:05 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:57.614 17:56:05 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:57.614 17:56:05 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.614 17:56:05 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.614 17:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.614 17:56:05 -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 ************************************ 00:07:57.614 START TEST nvmf_tcp 00:07:57.614 ************************************ 00:07:57.614 17:56:05 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.614 * Looking for test storage... 00:07:57.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:57.614 17:56:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:57.614 17:56:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:57.614 17:56:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:57.614 17:56:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.614 17:56:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.614 17:56:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 ************************************ 00:07:57.614 START TEST nvmf_target_core 00:07:57.614 ************************************ 00:07:57.614 17:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:57.614 * Looking for test storage... 00:07:57.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:57.614 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:57.614 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:57.614 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.614 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.873 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.874 17:56:05 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.874 ************************************ 00:07:57.874 START TEST nvmf_abort 00:07:57.874 ************************************ 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:57.874 * Looking for test storage... 
00:07:57.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.874 17:56:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.874 17:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:59.779 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.779 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.779 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.779 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.779 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.779 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.779 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.780 17:56:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:59.780 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:59.780 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:59.780 17:56:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:59.780 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.780 17:56:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:59.780 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.780 17:56:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.780 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.039 17:56:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:08:00.039 00:08:00.039 --- 10.0.0.2 ping statistics --- 00:08:00.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.039 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:08:00.039 00:08:00.039 --- 10.0.0.1 ping statistics --- 00:08:00.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.039 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:00.039 17:56:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2232587 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2232587 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2232587 ']' 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.039 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.039 [2024-07-23 17:56:07.610293] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:08:00.039 [2024-07-23 17:56:07.610386] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.039 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.039 [2024-07-23 17:56:07.671701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.298 [2024-07-23 17:56:07.754088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.298 [2024-07-23 17:56:07.754144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.298 [2024-07-23 17:56:07.754172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.298 [2024-07-23 17:56:07.754183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.298 [2024-07-23 17:56:07.754192] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
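The target above is launched with `-m 0xE`, and the notices that follow show reactors coming up on cores 1, 2 and 3. A minimal sketch of how that core mask decodes (the 8-bit scan width is an arbitrary choice for illustration; SPDK/DPDK masks can be wider):

```shell
# Decode an SPDK/DPDK core mask into core numbers (sketch).
# 0xE = 0b1110 -> cores 1, 2 and 3, matching the "Reactor started on
# core N" notices in this run; bit 0 (core 0) is left clear so the
# abort initiator can run there.
mask=0xE
cores=""
for bit in $(seq 0 7); do
  if (( (mask >> bit) & 1 )); then
    cores="$cores $bit"
  fi
done
echo "cores:$cores"
```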
00:08:00.298 [2024-07-23 17:56:07.754278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.298 [2024-07-23 17:56:07.754345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.298 [2024-07-23 17:56:07.754349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.298 [2024-07-23 17:56:07.892469] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.298 Malloc0 00:08:00.298 17:56:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.298 Delay0 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.298 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.298 [2024-07-23 17:56:07.956282] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.556 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.556 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.556 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.556 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.556 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.556 17:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:00.556 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.556 [2024-07-23 17:56:08.094395] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:02.490 Initializing NVMe Controllers 00:08:02.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:02.490 controller IO queue size 128 less than required 00:08:02.490 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:02.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:02.490 Initialization complete. Launching workers. 
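The `rpc_cmd` calls above (transport, malloc bdev, delay bdev, subsystem, namespace, listener) can be collected into one standalone sequence. A sketch only: the `scripts/rpc.py` path is an assumed location inside an SPDK checkout, and the commands are echoed rather than executed so the ordering is visible without a running target:

```shell
# Replay of the RPC sequence from this run (sketch, not executed).
# Arguments (64 MiB / 4096-byte Malloc0, 1 ms delays, 10.0.0.2:4420)
# are taken verbatim from the log; rpc.py path is an assumption.
rpc="scripts/rpc.py"
run() { echo "$rpc $*"; }

run nvmf_create_transport -t tcp -o -u 8192 -a 256
run bdev_malloc_create 64 4096 -b Malloc0
run bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

Swapping `run` for a direct invocation of `rpc.py` would issue the same sequence against a live target.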
00:08:02.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30479 00:08:02.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30540, failed to submit 62 00:08:02.490 success 30483, unsuccess 57, failed 0 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.490 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.490 rmmod nvme_tcp 00:08:02.747 rmmod nvme_fabrics 00:08:02.747 rmmod nvme_keyring 00:08:02.747 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.747 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:02.747 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:02.747 17:56:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2232587 ']' 00:08:02.747 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2232587 00:08:02.747 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2232587 ']' 00:08:02.747 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2232587 00:08:02.748 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:02.748 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.748 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2232587 00:08:02.748 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:02.748 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:02.748 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2232587' 00:08:02.748 killing process with pid 2232587 00:08:02.748 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2232587 00:08:02.748 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2232587 00:08:03.007 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.007 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.007 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.007 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.007 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.007 17:56:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.007 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.007 17:56:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.912 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.912 00:08:04.912 real 0m7.202s 00:08:04.912 user 0m10.329s 00:08:04.912 sys 0m2.488s 00:08:04.912 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.912 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.912 ************************************ 00:08:04.912 END TEST nvmf_abort 00:08:04.912 ************************************ 00:08:04.912 17:56:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:04.912 17:56:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:04.912 17:56:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.912 17:56:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.912 17:56:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:04.913 ************************************ 00:08:04.913 START TEST nvmf_ns_hotplug_stress 00:08:04.913 ************************************ 00:08:04.913 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:05.171 * Looking for test storage... 
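The abort run's counters earlier in this section are self-consistent, which is worth checking when reading these summaries: I/Os completed plus I/Os failed should equal aborts submitted plus aborts that failed to submit, and successful plus unsuccessful aborts should equal aborts submitted. A sketch with the numbers from this run:

```shell
# Cross-check the abort statistics reported above (sketch).
# NS line:    I/O completed: 123, failed: 30479
# CTRLR line: abort submitted 30540, failed to submit 62
#             success 30483, unsuccess 57
io_completed=123 io_failed=30479
abort_submitted=30540 abort_not_submitted=62
abort_success=30483 abort_unsuccess=57

(( io_completed + io_failed == abort_submitted + abort_not_submitted )) \
  && echo "I/O totals match abort totals"
(( abort_success + abort_unsuccess == abort_submitted )) \
  && echo "abort outcomes match submissions"
```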
00:08:05.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.171 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:05.172 17:56:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.172 17:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.703 17:56:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:07.703 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:07.703 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.703 17:56:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:07.703 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:07.703 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.703 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:08:07.703 00:08:07.703 --- 10.0.0.2 ping statistics --- 00:08:07.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.704 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:08:07.704 00:08:07.704 --- 10.0.0.1 ping statistics --- 00:08:07.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.704 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2234935 00:08:07.704 17:56:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2234935 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2234935 ']' 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.704 17:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.704 [2024-07-23 17:56:14.970160] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:08:07.704 [2024-07-23 17:56:14.970247] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.704 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.704 [2024-07-23 17:56:15.034458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.704 [2024-07-23 17:56:15.123414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:07.704 [2024-07-23 17:56:15.123476] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.704 [2024-07-23 17:56:15.123498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.704 [2024-07-23 17:56:15.123510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.704 [2024-07-23 17:56:15.123520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.704 [2024-07-23 17:56:15.123613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.704 [2024-07-23 17:56:15.123676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.704 [2024-07-23 17:56:15.123678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.704 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.704 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:07.704 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.704 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.704 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.704 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.704 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:07.704 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:08:07.961 [2024-07-23 17:56:15.497398] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.961 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:08.218 17:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.475 [2024-07-23 17:56:16.007987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.475 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.733 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:08.990 Malloc0 00:08:08.990 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:09.248 Delay0 00:08:09.248 17:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.505 17:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:09.763 NULL1 00:08:09.763 17:56:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:10.020 17:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2235235 00:08:10.020 17:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:10.020 17:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:10.020 17:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.020 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.393 Read completed with error (sct=0, sc=11) 00:08:11.393 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.393 17:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:11.393 17:56:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:11.650 true 00:08:11.650 17:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:11.650 17:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.582 17:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.583 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:12.583 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:12.840 true 00:08:12.840 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:12.840 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.098 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.355 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:13.355 17:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:13.612 true 00:08:13.612 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:13.612 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.545 17:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.803 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:14.804 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:15.061 true 00:08:15.061 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:15.061 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.319 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.319 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:15.319 17:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:15.576 true 00:08:15.576 17:56:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:15.576 17:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.508 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.765 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:16.765 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:17.022 true 00:08:17.022 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:17.022 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.280 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.537 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:17.538 17:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:17.795 true 00:08:17.795 17:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:17.795 17:56:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.360 17:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.617 17:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:18.617 17:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:18.875 true 00:08:18.875 17:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:18.875 17:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.133 17:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.390 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:19.390 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:19.647 true 00:08:19.647 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 2235235 00:08:19.647 17:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.611 17:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.869 17:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:20.869 17:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:21.126 true 00:08:21.126 17:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:21.126 17:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.384 17:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.642 17:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:21.642 17:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:21.900 true 00:08:21.900 17:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:21.900 17:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.833 17:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.090 17:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:23.090 17:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:23.348 true 00:08:23.348 17:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:23.348 17:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.605 17:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.863 17:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:23.863 17:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:24.121 true 00:08:24.121 17:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:24.121 17:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.054 17:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.312 17:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:25.312 17:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:25.569 true 00:08:25.569 17:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:25.569 17:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.827 17:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.084 17:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:26.084 17:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:26.341 true 00:08:26.341 17:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:26.341 17:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:27.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.274 17:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.274 17:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:27.274 17:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:27.531 true 00:08:27.531 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:27.531 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.788 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.046 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:28.046 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:28.303 true 00:08:28.303 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:28.303 17:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.234 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.234 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.234 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:29.234 17:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:29.491 true 00:08:29.491 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:29.492 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.749 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.006 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:30.006 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:30.263 true 00:08:30.263 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:30.263 17:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.194 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:08:31.194 17:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.453 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:31.453 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:31.710 true 00:08:31.710 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:31.710 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.968 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.225 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:32.225 17:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:32.483 true 00:08:32.483 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:32.484 17:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.855 17:56:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.855 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:33.855 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:34.112 true 00:08:34.112 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:34.112 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.370 17:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.627 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:34.628 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:34.885 true 00:08:34.885 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:34.885 17:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.450 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.708 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:35.708 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:35.964 true 00:08:35.964 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:35.964 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.221 17:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.478 17:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:36.478 17:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:36.735 true 00:08:36.735 17:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:36.735 17:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.711 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.969 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:37.969 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:38.227 true 00:08:38.227 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:38.227 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.484 17:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.742 17:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:38.742 17:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:38.999 true 00:08:38.999 17:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235 00:08:38.999 17:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.932 17:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.932 
Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:39.932 17:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:39.932 17:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:40.190 Initializing NVMe Controllers
00:08:40.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:40.190 Controller IO queue size 128, less than required.
00:08:40.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:40.190 Controller IO queue size 128, less than required.
00:08:40.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:40.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:40.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:40.190 Initialization complete. Launching workers.
00:08:40.190 ========================================================
00:08:40.190                                                                                                      Latency(us)
00:08:40.190 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:40.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     656.58       0.32  108186.06    2237.28 1031835.22
00:08:40.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11786.72       5.76   10859.31    3969.36  541866.85
00:08:40.190 ========================================================
00:08:40.190 Total                                                                    :   12443.30       6.08   15994.81    2237.28 1031835.22
00:08:40.190
00:08:40.190 true
00:08:40.190 17:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2235235
00:08:40.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2235235) - No such process
00:08:40.190 17:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2235235
00:08:40.190 17:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:40.447 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:40.704 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:40.704 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:40.704 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:40.704 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:40.704 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:40.962 null0 00:08:40.962 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:40.962 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:40.962 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:41.220 null1 00:08:41.220 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:41.220 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:41.220 17:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:41.477 null2 00:08:41.477 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:41.477 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:41.477 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:41.735 null3 00:08:41.735 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:41.735 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:41.735 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:41.992 null4 00:08:41.992 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:41.992 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:41.992 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:42.250 null5 00:08:42.250 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:42.250 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:42.250 17:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:42.508 null6 00:08:42.508 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:42.508 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:42.508 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:42.766 null7 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2239189 2239190 2239192 2239194 2239197 2239200 2239203 2239206
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.766 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:43.024 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:43.024 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:43.024 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:43.024 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:43.024 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:43.024 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:43.024 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:43.024 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.282 17:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:43.540 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:43.540 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:43.540 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:43.540 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:43.540 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:43.540 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:43.540 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:43.540 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.797 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:44.055 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:44.055 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:44.055 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:44.055 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:44.055 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:44.055 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:44.055 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:44.055 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:44.313 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.313 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.313 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:44.313 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.313 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.313 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:44.313 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.571 17:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:44.829 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:44.829 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:44.829 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:44.829 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:44.829 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:44.829 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:44.829 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:44.829 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.087 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:45.345 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:45.345 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:45.345 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:45.345 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:45.345 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:45.345 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:45.345 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:45.345 17:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.604 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:45.861 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:45.861 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:45.861 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:45.861 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:45.861 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:45.861 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:45.861 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:45.861 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.118 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:46.376 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:46.376 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:46.376 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:46.376 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:46.376 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:46.376 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:46.376 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:46.376 17:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.634 17:56:54
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.634 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.892 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:46.892 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.892 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.892 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.892 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:46.892 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.893 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.893 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.151 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.409 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.409 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:47.409 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.409 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.409 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:47.409 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.409 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.409 17:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.667 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.668 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.925 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:47.925 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.925 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.925 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.925 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.925 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.925 17:56:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:47.925 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.184 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.184 rmmod nvme_tcp 00:08:48.442 rmmod nvme_fabrics 00:08:48.442 rmmod nvme_keyring 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2234935 ']' 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2234935 00:08:48.442 17:56:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2234935 ']' 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2234935 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2234935 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2234935' 00:08:48.442 killing process with pid 2234935 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2234935 00:08:48.442 17:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2234935 00:08:48.702 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.702 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.702 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.702 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.702 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.702 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.702 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.702 17:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:50.610 00:08:50.610 real 0m45.643s 00:08:50.610 user 3m27.524s 00:08:50.610 sys 0m16.176s 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.610 ************************************ 00:08:50.610 END TEST nvmf_ns_hotplug_stress 00:08:50.610 ************************************ 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.610 ************************************ 00:08:50.610 START TEST nvmf_delete_subsystem 00:08:50.610 ************************************ 00:08:50.610 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:50.870 * Looking for test storage... 
00:08:50.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:50.870 17:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.779 17:57:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.779 17:57:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.779 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.779 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.779 17:57:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.779 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.779 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.779 
17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.779 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:53.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:08:53.037 00:08:53.037 --- 10.0.0.2 ping statistics --- 00:08:53.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.037 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:08:53.037 00:08:53.037 --- 10.0.0.1 ping statistics --- 00:08:53.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.037 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2242069 00:08:53.037 17:57:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2242069 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2242069 ']' 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.037 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.037 [2024-07-23 17:57:00.545618] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:08:53.037 [2024-07-23 17:57:00.545733] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.037 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.037 [2024-07-23 17:57:00.611744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.295 [2024-07-23 17:57:00.703185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:53.295 [2024-07-23 17:57:00.703243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.295 [2024-07-23 17:57:00.703256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.295 [2024-07-23 17:57:00.703267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.295 [2024-07-23 17:57:00.703277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.295 [2024-07-23 17:57:00.703383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.295 [2024-07-23 17:57:00.703388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.295 [2024-07-23 17:57:00.838124] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.295 [2024-07-23 17:57:00.854407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.295 NULL1 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.295 17:57:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.295 Delay0 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2242132 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:53.295 17:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:53.295 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.295 [2024-07-23 17:57:00.929050] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:55.822 17:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.822 17:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.822 17:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error 
(sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 [2024-07-23 17:57:03.024830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712010 is same with the state(5) to be set 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed 
with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 starting I/O failed: -6 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Write completed with error (sct=0, sc=8) 00:08:55.822 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 starting I/O failed: -6 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 starting I/O failed: -6 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write 
completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 starting I/O failed: -6 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 starting I/O failed: -6 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 starting I/O failed: -6 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 starting I/O failed: -6 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 
00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 starting I/O failed: -6 00:08:55.823 [2024-07-23 17:57:03.025496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f411400d490 is same with the state(5) to be set 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed 
with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Write completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:55.823 Read completed with error (sct=0, sc=8) 00:08:56.418 [2024-07-23 17:57:03.984038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172e630 is same with the state(5) to be set 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error 
(sct=0, sc=8) 00:08:56.418 [2024-07-23 17:57:04.023224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711ce0 is same with the state(5) to be set 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 [2024-07-23 17:57:04.023466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716d40 is same with the state(5) to be set 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 
00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 [2024-07-23 17:57:04.027850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f411400d000 is same with the state(5) to be set 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Write completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 Read completed with error (sct=0, sc=8) 00:08:56.418 
[2024-07-23 17:57:04.028024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f411400d7c0 is same with the state(5) to be set 00:08:56.418 Initializing NVMe Controllers 00:08:56.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:56.418 Controller IO queue size 128, less than required. 00:08:56.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:56.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:56.418 Initialization complete. Launching workers. 00:08:56.418 ======================================================== 00:08:56.418 Latency(us) 00:08:56.418 Device Information : IOPS MiB/s Average min max 00:08:56.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 153.38 0.07 934534.90 574.19 1011439.91 00:08:56.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.83 0.08 947334.86 357.28 2004471.85 00:08:56.418 ======================================================== 00:08:56.418 Total : 314.21 0.15 941086.54 357.28 2004471.85 00:08:56.418 00:08:56.418 [2024-07-23 17:57:04.028878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172e630 (9): Bad file descriptor 00:08:56.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:56.418 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.418 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:56.418 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2242132 00:08:56.418 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2242132 00:08:56.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2242132) - No such process 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2242132 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2242132 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2242132 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:56.984 17:57:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.984 [2024-07-23 17:57:04.549899] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2242599 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2242599 00:08:56.984 17:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.984 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.984 [2024-07-23 17:57:04.606990] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:57.549 17:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.549 17:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2242599 00:08:57.549 17:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.113 17:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.113 17:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2242599 00:08:58.113 17:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.677 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.677 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2242599 00:08:58.677 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.935 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.935 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2242599 00:08:58.935 17:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.499 17:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.499 17:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2242599 00:08:59.499 17:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.063 17:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.063 17:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2242599 00:09:00.063 17:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.320 Initializing NVMe Controllers 00:09:00.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.320 Controller IO queue size 128, less than required. 00:09:00.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:00.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:00.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:00.320 Initialization complete. Launching workers. 
00:09:00.320 ======================================================== 00:09:00.320 Latency(us) 00:09:00.320 Device Information : IOPS MiB/s Average min max 00:09:00.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003583.66 1000184.60 1011452.76 00:09:00.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004669.82 1000171.46 1012345.00 00:09:00.320 ======================================================== 00:09:00.320 Total : 256.00 0.12 1004126.74 1000171.46 1012345.00 00:09:00.320 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2242599 00:09:00.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2242599) - No such process 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2242599 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:09:00.578 rmmod nvme_tcp 00:09:00.578 rmmod nvme_fabrics 00:09:00.578 rmmod nvme_keyring 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2242069 ']' 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2242069 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2242069 ']' 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2242069 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2242069 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2242069' 00:09:00.578 killing process with pid 2242069 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2242069 00:09:00.578 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2242069 00:09:00.836 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.836 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.836 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.836 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.836 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.836 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.836 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.836 17:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:03.368 00:09:03.368 real 0m12.171s 00:09:03.368 user 0m27.589s 00:09:03.368 sys 0m2.899s 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 ************************************ 00:09:03.368 END TEST nvmf_delete_subsystem 00:09:03.368 ************************************ 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 ************************************ 00:09:03.368 START TEST nvmf_host_management 00:09:03.368 ************************************ 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.368 * Looking for test storage... 00:09:03.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.368 17:57:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.368 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.369 17:57:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.369 17:57:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.369 17:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:05.274 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.274 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:05.275 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:05.275 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:09:05.275 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:09:05.275 00:09:05.275 --- 10.0.0.2 ping statistics --- 00:09:05.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.275 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:05.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:09:05.275 00:09:05.275 --- 10.0.0.1 ping statistics --- 00:09:05.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.275 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.275 17:57:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2245446 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2245446 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2245446 ']' 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:05.275 17:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.275 [2024-07-23 17:57:12.804683] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:09:05.275 [2024-07-23 17:57:12.804780] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.275 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.275 [2024-07-23 17:57:12.873730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.534 [2024-07-23 17:57:12.966092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.534 [2024-07-23 17:57:12.966164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.534 [2024-07-23 17:57:12.966187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.534 [2024-07-23 17:57:12.966197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.534 [2024-07-23 17:57:12.966206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:05.534 [2024-07-23 17:57:12.966304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.534 [2024-07-23 17:57:12.966364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.534 [2024-07-23 17:57:12.966431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:05.534 [2024-07-23 17:57:12.966434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.534 [2024-07-23 17:57:13.120847] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:05.534 17:57:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.534 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.534 Malloc0 00:09:05.534 [2024-07-23 17:57:13.185960] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2245613 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2245613 /var/tmp/bdevperf.sock 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2245613 ']' 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:05.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:05.792 { 00:09:05.792 "params": { 00:09:05.792 "name": "Nvme$subsystem", 00:09:05.792 "trtype": "$TEST_TRANSPORT", 00:09:05.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.792 "adrfam": "ipv4", 00:09:05.792 "trsvcid": "$NVMF_PORT", 00:09:05.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.792 "hdgst": ${hdgst:-false}, 
00:09:05.792 "ddgst": ${ddgst:-false} 00:09:05.792 }, 00:09:05.792 "method": "bdev_nvme_attach_controller" 00:09:05.792 } 00:09:05.792 EOF 00:09:05.792 )") 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:05.792 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:05.792 "params": { 00:09:05.792 "name": "Nvme0", 00:09:05.792 "trtype": "tcp", 00:09:05.793 "traddr": "10.0.0.2", 00:09:05.793 "adrfam": "ipv4", 00:09:05.793 "trsvcid": "4420", 00:09:05.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:05.793 "hdgst": false, 00:09:05.793 "ddgst": false 00:09:05.793 }, 00:09:05.793 "method": "bdev_nvme_attach_controller" 00:09:05.793 }' 00:09:05.793 [2024-07-23 17:57:13.265183] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:09:05.793 [2024-07-23 17:57:13.265271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245613 ] 00:09:05.793 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.793 [2024-07-23 17:57:13.327005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.793 [2024-07-23 17:57:13.413800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.050 Running I/O for 10 seconds... 
00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:06.050 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:06.308 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:06.308 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:06.308 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:06.308 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:06.308 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.308 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.308 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.568 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:06.568 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:06.568 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:06.568 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:06.568 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:06.568 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:06.568 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.568 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.568 [2024-07-23 17:57:13.988475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 
17:57:13.988687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.988723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43e80 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.991275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.568 [2024-07-23 17:57:13.991337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.991358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.568 [2024-07-23 17:57:13.991373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.991387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.568 [2024-07-23 17:57:13.991401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.991415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.568 [2024-07-23 17:57:13.991427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.991441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226ded0 is same with the state(5) to be set 00:09:06.568 [2024-07-23 17:57:13.991866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.991892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.991917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.991948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.991964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.991978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.991993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992050] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992208] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.568 [2024-07-23 17:57:13.992306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.568 [2024-07-23 17:57:13.992331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 
[2024-07-23 17:57:13.992588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.569 [2024-07-23 17:57:13.992916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:09:06.569 [2024-07-23 17:57:13.992935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.992978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.992994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:06.569 [2024-07-23 
17:57:13.993086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.569 [2024-07-23 17:57:13.993203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 17:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.569 [2024-07-23 17:57:13.993327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.569 [2024-07-23 17:57:13.993430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.569 [2024-07-23 17:57:13.993444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.570 [2024-07-23 17:57:13.993867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.570 [2024-07-23 17:57:13.993953] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x269fa90 was disconnected and freed. reset controller. 
00:09:06.570 [2024-07-23 17:57:13.995068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:06.570 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:06.570 00:09:06.570 Latency(us) 00:09:06.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.570 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:06.570 Job: Nvme0n1 ended in about 0.40 seconds with error 00:09:06.570 Verification LBA range: start 0x0 length 0x400 00:09:06.570 Nvme0n1 : 0.40 1585.57 99.10 158.56 0.00 35645.19 3094.76 33787.45 00:09:06.570 =================================================================================================================== 00:09:06.570 Total : 1585.57 99.10 158.56 0.00 35645.19 3094.76 33787.45 00:09:06.570 [2024-07-23 17:57:13.996947] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.570 [2024-07-23 17:57:13.996984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226ded0 (9): Bad file descriptor 00:09:06.570 17:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.570 17:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:06.570 [2024-07-23 17:57:14.129489] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2245613 00:09:07.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2245613) - No such process 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:07.503 { 00:09:07.503 "params": { 00:09:07.503 "name": "Nvme$subsystem", 00:09:07.503 "trtype": "$TEST_TRANSPORT", 00:09:07.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.503 "adrfam": "ipv4", 00:09:07.503 "trsvcid": "$NVMF_PORT", 00:09:07.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.503 "hdgst": ${hdgst:-false}, 00:09:07.503 "ddgst": ${ddgst:-false} 00:09:07.503 }, 00:09:07.503 "method": "bdev_nvme_attach_controller" 00:09:07.503 } 00:09:07.503 EOF 00:09:07.503 )") 00:09:07.503 
17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:07.503 17:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:07.503 "params": { 00:09:07.503 "name": "Nvme0", 00:09:07.503 "trtype": "tcp", 00:09:07.503 "traddr": "10.0.0.2", 00:09:07.503 "adrfam": "ipv4", 00:09:07.503 "trsvcid": "4420", 00:09:07.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:07.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:07.503 "hdgst": false, 00:09:07.503 "ddgst": false 00:09:07.503 }, 00:09:07.503 "method": "bdev_nvme_attach_controller" 00:09:07.503 }' 00:09:07.503 [2024-07-23 17:57:15.049897] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:09:07.503 [2024-07-23 17:57:15.049983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245775 ] 00:09:07.503 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.503 [2024-07-23 17:57:15.113725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.761 [2024-07-23 17:57:15.200714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.019 Running I/O for 1 seconds... 
00:09:08.952 00:09:08.952 Latency(us) 00:09:08.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.952 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:08.952 Verification LBA range: start 0x0 length 0x400 00:09:08.952 Nvme0n1 : 1.03 1608.85 100.55 0.00 0.00 39153.63 6505.05 34369.99 00:09:08.952 =================================================================================================================== 00:09:08.952 Total : 1608.85 100.55 0.00 0.00 39153.63 6505.05 34369.99 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.210 rmmod nvme_tcp 
00:09:09.210 rmmod nvme_fabrics 00:09:09.210 rmmod nvme_keyring 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2245446 ']' 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2245446 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2245446 ']' 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2245446 00:09:09.210 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:09.211 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.211 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2245446 00:09:09.211 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:09.211 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:09.211 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2245446' 00:09:09.211 killing process with pid 2245446 00:09:09.211 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2245446 00:09:09.211 17:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2245446 00:09:09.470 [2024-07-23 17:57:17.066241] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:09.470 17:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.470 17:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:09.470 17:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:09.470 17:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.470 17:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.470 17:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.470 17:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.470 17:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:12.008 00:09:12.008 real 0m8.669s 00:09:12.008 user 0m19.520s 00:09:12.008 sys 0m2.721s 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:12.008 ************************************ 00:09:12.008 END TEST nvmf_host_management 00:09:12.008 ************************************ 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.008 ************************************ 00:09:12.008 START TEST nvmf_lvol 00:09:12.008 ************************************ 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:12.008 * Looking for test storage... 00:09:12.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.008 
17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.008 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.009 17:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:13.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:13.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.919 17:57:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:13.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:13.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.919 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:13.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:09:13.920 00:09:13.920 --- 10.0.0.2 ping statistics --- 00:09:13.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.920 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:09:13.920 00:09:13.920 --- 10.0.0.1 ping statistics --- 00:09:13.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.920 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2247973 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2247973 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2247973 ']' 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.920 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.920 [2024-07-23 17:57:21.520343] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:09:13.920 [2024-07-23 17:57:21.520444] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.920 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.177 [2024-07-23 17:57:21.586643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.177 [2024-07-23 17:57:21.667146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.177 [2024-07-23 17:57:21.667205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.178 [2024-07-23 17:57:21.667228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.178 [2024-07-23 17:57:21.667238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.178 [2024-07-23 17:57:21.667247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:14.178 [2024-07-23 17:57:21.667348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.178 [2024-07-23 17:57:21.667407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.178 [2024-07-23 17:57:21.667410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.178 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.178 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:14.178 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.178 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.178 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:14.178 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.178 17:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.437 [2024-07-23 17:57:22.050120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.437 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.695 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:14.695 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.278 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:15.278 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:15.278 17:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:15.550 17:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=995e1250-cd6f-407e-90d8-a38afacffa1a 00:09:15.550 17:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 995e1250-cd6f-407e-90d8-a38afacffa1a lvol 20 00:09:15.807 17:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7d3731bb-4a9f-4193-ba65-2a66e0d38074 00:09:15.807 17:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:16.063 17:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d3731bb-4a9f-4193-ba65-2a66e0d38074 00:09:16.321 17:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.578 [2024-07-23 17:57:24.113490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.578 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.836 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2248399 00:09:16.836 17:57:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:16.836 17:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:16.836 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.770 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7d3731bb-4a9f-4193-ba65-2a66e0d38074 MY_SNAPSHOT 00:09:18.028 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7b1483cd-7ea1-4005-ace6-1ba80ac0e1f4 00:09:18.028 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7d3731bb-4a9f-4193-ba65-2a66e0d38074 30 00:09:18.594 17:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7b1483cd-7ea1-4005-ace6-1ba80ac0e1f4 MY_CLONE 00:09:18.852 17:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3e0b25d8-cb6e-457d-bdda-5613a9bd8f3c 00:09:18.852 17:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3e0b25d8-cb6e-457d-bdda-5613a9bd8f3c 00:09:19.418 17:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2248399 00:09:27.528 Initializing NVMe Controllers 00:09:27.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:27.528 Controller IO queue size 128, less than required. 00:09:27.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:27.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:27.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:27.528 Initialization complete. Launching workers. 00:09:27.528 ======================================================== 00:09:27.528 Latency(us) 00:09:27.528 Device Information : IOPS MiB/s Average min max 00:09:27.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10666.40 41.67 12011.12 1529.63 90688.94 00:09:27.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10532.10 41.14 12163.67 2243.64 66537.04 00:09:27.528 ======================================================== 00:09:27.528 Total : 21198.50 82.81 12086.91 1529.63 90688.94 00:09:27.528 00:09:27.528 17:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:27.528 17:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d3731bb-4a9f-4193-ba65-2a66e0d38074 00:09:27.786 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 995e1250-cd6f-407e-90d8-a38afacffa1a 00:09:28.043 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:28.043 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.044 rmmod nvme_tcp 00:09:28.044 rmmod nvme_fabrics 00:09:28.044 rmmod nvme_keyring 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2247973 ']' 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2247973 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2247973 ']' 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2247973 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2247973 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2247973' 00:09:28.044 killing process with pid 2247973 00:09:28.044 17:57:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2247973 00:09:28.044 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2247973 00:09:28.303 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.303 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.303 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.303 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.303 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.303 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.303 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.303 17:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.843 00:09:30.843 real 0m18.699s 00:09:30.843 user 1m2.749s 00:09:30.843 sys 0m5.966s 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:30.843 ************************************ 00:09:30.843 END TEST nvmf_lvol 00:09:30.843 ************************************ 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:30.843 17:57:37 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.843 ************************************ 00:09:30.843 START TEST nvmf_lvs_grow 00:09:30.843 ************************************ 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:30.843 * Looking for test storage... 00:09:30.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.843 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:30.844 17:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.844 17:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.746 17:57:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:32.746 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.746 
17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:32.746 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.746 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.747 17:57:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:32.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:32.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.747 17:57:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:32.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:09:32.747 00:09:32.747 --- 10.0.0.2 ping statistics --- 00:09:32.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.747 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:09:32.747 00:09:32.747 --- 10.0.0.1 ping statistics --- 00:09:32.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.747 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2251669 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2251669 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2251669 ']' 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.747 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:32.747 [2024-07-23 17:57:40.319123] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:09:32.747 [2024-07-23 17:57:40.319202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.747 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.747 [2024-07-23 17:57:40.382376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.005 [2024-07-23 17:57:40.472338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.005 [2024-07-23 17:57:40.472396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:33.005 [2024-07-23 17:57:40.472409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.005 [2024-07-23 17:57:40.472420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.005 [2024-07-23 17:57:40.472429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.005 [2024-07-23 17:57:40.472464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.005 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.005 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:33.005 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.005 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:33.005 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:33.005 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.005 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.263 [2024-07-23 17:57:40.825580] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.263 ************************************ 00:09:33.263 START TEST lvs_grow_clean 00:09:33.263 ************************************ 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:33.263 17:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.520 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:33.520 17:57:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:33.777 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ae31203c-017a-4dd5-b016-823ec944bee2 00:09:33.777 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:33.777 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:34.034 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:34.034 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:34.034 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae31203c-017a-4dd5-b016-823ec944bee2 lvol 150 00:09:34.293 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5c88d658-6e59-432f-b9a9-91b1105b663a 00:09:34.293 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.293 17:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:34.550 [2024-07-23 17:57:42.112403] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:34.550 [2024-07-23 17:57:42.112486] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:34.550 true 00:09:34.550 17:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:34.550 17:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:34.807 17:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:34.808 17:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:35.065 17:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5c88d658-6e59-432f-b9a9-91b1105b663a 00:09:35.322 17:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:35.580 [2024-07-23 17:57:43.099454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.580 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.838 17:57:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2252009 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2252009 /var/tmp/bdevperf.sock 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2252009 ']' 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.838 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:35.838 [2024-07-23 17:57:43.405504] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:09:35.838 [2024-07-23 17:57:43.405578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252009 ] 00:09:35.838 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.838 [2024-07-23 17:57:43.464718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.095 [2024-07-23 17:57:43.550592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.095 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.095 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:36.095 17:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:36.660 Nvme0n1 00:09:36.660 17:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:36.918 [ 00:09:36.918 { 00:09:36.918 "name": "Nvme0n1", 00:09:36.918 "aliases": [ 00:09:36.918 "5c88d658-6e59-432f-b9a9-91b1105b663a" 00:09:36.918 ], 00:09:36.918 "product_name": "NVMe disk", 00:09:36.918 "block_size": 4096, 00:09:36.918 "num_blocks": 38912, 00:09:36.918 "uuid": "5c88d658-6e59-432f-b9a9-91b1105b663a", 00:09:36.918 "assigned_rate_limits": { 00:09:36.918 "rw_ios_per_sec": 0, 00:09:36.918 "rw_mbytes_per_sec": 0, 00:09:36.918 "r_mbytes_per_sec": 0, 00:09:36.918 "w_mbytes_per_sec": 0 00:09:36.918 }, 00:09:36.918 "claimed": false, 00:09:36.918 "zoned": false, 00:09:36.918 
"supported_io_types": { 00:09:36.918 "read": true, 00:09:36.918 "write": true, 00:09:36.918 "unmap": true, 00:09:36.918 "flush": true, 00:09:36.918 "reset": true, 00:09:36.918 "nvme_admin": true, 00:09:36.918 "nvme_io": true, 00:09:36.918 "nvme_io_md": false, 00:09:36.918 "write_zeroes": true, 00:09:36.918 "zcopy": false, 00:09:36.918 "get_zone_info": false, 00:09:36.918 "zone_management": false, 00:09:36.918 "zone_append": false, 00:09:36.918 "compare": true, 00:09:36.918 "compare_and_write": true, 00:09:36.918 "abort": true, 00:09:36.918 "seek_hole": false, 00:09:36.918 "seek_data": false, 00:09:36.918 "copy": true, 00:09:36.918 "nvme_iov_md": false 00:09:36.918 }, 00:09:36.918 "memory_domains": [ 00:09:36.918 { 00:09:36.918 "dma_device_id": "system", 00:09:36.918 "dma_device_type": 1 00:09:36.918 } 00:09:36.918 ], 00:09:36.918 "driver_specific": { 00:09:36.918 "nvme": [ 00:09:36.918 { 00:09:36.918 "trid": { 00:09:36.918 "trtype": "TCP", 00:09:36.918 "adrfam": "IPv4", 00:09:36.918 "traddr": "10.0.0.2", 00:09:36.918 "trsvcid": "4420", 00:09:36.918 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:36.918 }, 00:09:36.918 "ctrlr_data": { 00:09:36.918 "cntlid": 1, 00:09:36.918 "vendor_id": "0x8086", 00:09:36.918 "model_number": "SPDK bdev Controller", 00:09:36.918 "serial_number": "SPDK0", 00:09:36.918 "firmware_revision": "24.09", 00:09:36.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:36.918 "oacs": { 00:09:36.918 "security": 0, 00:09:36.918 "format": 0, 00:09:36.918 "firmware": 0, 00:09:36.918 "ns_manage": 0 00:09:36.918 }, 00:09:36.918 "multi_ctrlr": true, 00:09:36.918 "ana_reporting": false 00:09:36.918 }, 00:09:36.918 "vs": { 00:09:36.918 "nvme_version": "1.3" 00:09:36.918 }, 00:09:36.918 "ns_data": { 00:09:36.918 "id": 1, 00:09:36.918 "can_share": true 00:09:36.918 } 00:09:36.918 } 00:09:36.918 ], 00:09:36.918 "mp_policy": "active_passive" 00:09:36.918 } 00:09:36.918 } 00:09:36.918 ] 00:09:36.918 17:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2252145 00:09:36.918 17:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:36.918 17:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:36.918 Running I/O for 10 seconds... 00:09:37.851 Latency(us) 00:09:37.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.851 Nvme0n1 : 1.00 15244.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:37.851 =================================================================================================================== 00:09:37.851 Total : 15244.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:37.851 00:09:38.784 17:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:39.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.046 Nvme0n1 : 2.00 15465.00 60.41 0.00 0.00 0.00 0.00 0.00 00:09:39.046 =================================================================================================================== 00:09:39.046 Total : 15465.00 60.41 0.00 0.00 0.00 0.00 0.00 00:09:39.046 00:09:39.046 true 00:09:39.046 17:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:39.046 17:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:39.346 17:57:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:39.346 17:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:39.346 17:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2252145 00:09:39.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.911 Nvme0n1 : 3.00 15517.00 60.61 0.00 0.00 0.00 0.00 0.00 00:09:39.911 =================================================================================================================== 00:09:39.911 Total : 15517.00 60.61 0.00 0.00 0.00 0.00 0.00 00:09:39.911 00:09:40.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.844 Nvme0n1 : 4.00 15607.75 60.97 0.00 0.00 0.00 0.00 0.00 00:09:40.844 =================================================================================================================== 00:09:40.844 Total : 15607.75 60.97 0.00 0.00 0.00 0.00 0.00 00:09:40.844 00:09:42.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.219 Nvme0n1 : 5.00 15673.80 61.23 0.00 0.00 0.00 0.00 0.00 00:09:42.219 =================================================================================================================== 00:09:42.219 Total : 15673.80 61.23 0.00 0.00 0.00 0.00 0.00 00:09:42.219 00:09:43.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.152 Nvme0n1 : 6.00 15707.33 61.36 0.00 0.00 0.00 0.00 0.00 00:09:43.152 =================================================================================================================== 00:09:43.152 Total : 15707.33 61.36 0.00 0.00 0.00 0.00 0.00 00:09:43.152 00:09:44.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.087 Nvme0n1 : 7.00 15749.43 61.52 0.00 0.00 0.00 0.00 0.00 00:09:44.087 
=================================================================================================================== 00:09:44.087 Total : 15749.43 61.52 0.00 0.00 0.00 0.00 0.00 00:09:44.087 00:09:45.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.020 Nvme0n1 : 8.00 15789.12 61.68 0.00 0.00 0.00 0.00 0.00 00:09:45.020 =================================================================================================================== 00:09:45.020 Total : 15789.12 61.68 0.00 0.00 0.00 0.00 0.00 00:09:45.020 00:09:45.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.955 Nvme0n1 : 9.00 15826.89 61.82 0.00 0.00 0.00 0.00 0.00 00:09:45.955 =================================================================================================================== 00:09:45.955 Total : 15826.89 61.82 0.00 0.00 0.00 0.00 0.00 00:09:45.955 00:09:46.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.889 Nvme0n1 : 10.00 15844.40 61.89 0.00 0.00 0.00 0.00 0.00 00:09:46.889 =================================================================================================================== 00:09:46.889 Total : 15844.40 61.89 0.00 0.00 0.00 0.00 0.00 00:09:46.889 00:09:46.889 00:09:46.889 Latency(us) 00:09:46.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.889 Nvme0n1 : 10.00 15850.71 61.92 0.00 0.00 8070.79 5170.06 18932.62 00:09:46.889 =================================================================================================================== 00:09:46.889 Total : 15850.71 61.92 0.00 0.00 8070.79 5170.06 18932.62 00:09:46.889 0 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2252009 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@948 -- # '[' -z 2252009 ']' 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2252009 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2252009 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2252009' 00:09:46.889 killing process with pid 2252009 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2252009 00:09:46.889 Received shutdown signal, test time was about 10.000000 seconds 00:09:46.889 00:09:46.889 Latency(us) 00:09:46.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.889 =================================================================================================================== 00:09:46.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:46.889 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2252009 00:09:47.147 17:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.404 17:57:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:47.662 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:47.662 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:47.920 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:47.920 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:47.920 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:48.178 [2024-07-23 17:57:55.745047] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:48.178 17:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:48.436 request: 00:09:48.436 { 00:09:48.436 "uuid": "ae31203c-017a-4dd5-b016-823ec944bee2", 00:09:48.436 "method": "bdev_lvol_get_lvstores", 00:09:48.436 "req_id": 1 00:09:48.436 } 00:09:48.436 Got JSON-RPC error response 00:09:48.436 response: 00:09:48.436 { 00:09:48.436 "code": -19, 00:09:48.436 "message": "No such device" 00:09:48.436 } 00:09:48.436 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:48.436 17:57:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:48.436 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:48.436 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:48.436 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:48.694 aio_bdev 00:09:48.694 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5c88d658-6e59-432f-b9a9-91b1105b663a 00:09:48.694 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5c88d658-6e59-432f-b9a9-91b1105b663a 00:09:48.694 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:48.694 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:48.694 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:48.694 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:48.694 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:48.952 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5c88d658-6e59-432f-b9a9-91b1105b663a -t 2000 00:09:49.210 [ 00:09:49.210 { 
00:09:49.210 "name": "5c88d658-6e59-432f-b9a9-91b1105b663a", 00:09:49.210 "aliases": [ 00:09:49.210 "lvs/lvol" 00:09:49.210 ], 00:09:49.210 "product_name": "Logical Volume", 00:09:49.210 "block_size": 4096, 00:09:49.210 "num_blocks": 38912, 00:09:49.210 "uuid": "5c88d658-6e59-432f-b9a9-91b1105b663a", 00:09:49.210 "assigned_rate_limits": { 00:09:49.210 "rw_ios_per_sec": 0, 00:09:49.210 "rw_mbytes_per_sec": 0, 00:09:49.210 "r_mbytes_per_sec": 0, 00:09:49.210 "w_mbytes_per_sec": 0 00:09:49.210 }, 00:09:49.210 "claimed": false, 00:09:49.210 "zoned": false, 00:09:49.210 "supported_io_types": { 00:09:49.210 "read": true, 00:09:49.210 "write": true, 00:09:49.210 "unmap": true, 00:09:49.210 "flush": false, 00:09:49.210 "reset": true, 00:09:49.210 "nvme_admin": false, 00:09:49.210 "nvme_io": false, 00:09:49.210 "nvme_io_md": false, 00:09:49.210 "write_zeroes": true, 00:09:49.210 "zcopy": false, 00:09:49.210 "get_zone_info": false, 00:09:49.210 "zone_management": false, 00:09:49.210 "zone_append": false, 00:09:49.210 "compare": false, 00:09:49.210 "compare_and_write": false, 00:09:49.210 "abort": false, 00:09:49.210 "seek_hole": true, 00:09:49.210 "seek_data": true, 00:09:49.210 "copy": false, 00:09:49.210 "nvme_iov_md": false 00:09:49.210 }, 00:09:49.210 "driver_specific": { 00:09:49.210 "lvol": { 00:09:49.210 "lvol_store_uuid": "ae31203c-017a-4dd5-b016-823ec944bee2", 00:09:49.210 "base_bdev": "aio_bdev", 00:09:49.210 "thin_provision": false, 00:09:49.210 "num_allocated_clusters": 38, 00:09:49.210 "snapshot": false, 00:09:49.210 "clone": false, 00:09:49.210 "esnap_clone": false 00:09:49.210 } 00:09:49.210 } 00:09:49.210 } 00:09:49.210 ] 00:09:49.210 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:49.210 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ae31203c-017a-4dd5-b016-823ec944bee2 00:09:49.210 17:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:49.468 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:49.468 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:49.468 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:49.726 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:49.726 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5c88d658-6e59-432f-b9a9-91b1105b663a 00:09:49.984 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae31203c-017a-4dd5-b016-823ec944bee2 00:09:50.242 17:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:50.500 00:09:50.500 real 0m17.234s 00:09:50.500 user 0m16.622s 00:09:50.500 sys 0m1.891s 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.500 17:57:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:50.500 ************************************ 00:09:50.500 END TEST lvs_grow_clean 00:09:50.500 ************************************ 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:50.500 ************************************ 00:09:50.500 START TEST lvs_grow_dirty 00:09:50.500 ************************************ 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:50.500 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:50.501 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:50.501 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:50.501 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:50.501 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:50.501 17:57:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:50.758 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:50.758 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:51.016 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:51.016 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:51.275 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:09:51.275 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:09:51.275 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:51.533 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:51.533 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:51.533 17:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 lvol 150 00:09:51.791 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=89610f64-691b-4cf5-9cc2-5e8fca68f5b4 00:09:51.791 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:51.791 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:52.048 [2024-07-23 17:57:59.457729] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:52.048 [2024-07-23 17:57:59.457813] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:52.048 true 00:09:52.048 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:09:52.048 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:52.305 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:52.305 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:52.305 17:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 89610f64-691b-4cf5-9cc2-5e8fca68f5b4 00:09:52.563 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:52.820 [2024-07-23 17:58:00.432794] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.820 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2254187 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2254187 /var/tmp/bdevperf.sock 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2254187 ']' 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.078 17:58:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:53.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.078 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:53.078 [2024-07-23 17:58:00.730560] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:09:53.078 [2024-07-23 17:58:00.730649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254187 ] 00:09:53.335 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.335 [2024-07-23 17:58:00.788025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.335 [2024-07-23 17:58:00.871214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.335 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.335 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:53.336 17:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:53.899 Nvme0n1 00:09:53.899 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:54.156 [ 00:09:54.156 { 00:09:54.156 "name": "Nvme0n1", 00:09:54.156 "aliases": [ 00:09:54.156 "89610f64-691b-4cf5-9cc2-5e8fca68f5b4" 00:09:54.156 ], 00:09:54.156 "product_name": "NVMe disk", 00:09:54.156 "block_size": 4096, 00:09:54.156 "num_blocks": 38912, 00:09:54.156 "uuid": "89610f64-691b-4cf5-9cc2-5e8fca68f5b4", 00:09:54.156 "assigned_rate_limits": { 00:09:54.156 "rw_ios_per_sec": 0, 00:09:54.156 "rw_mbytes_per_sec": 0, 00:09:54.156 "r_mbytes_per_sec": 0, 00:09:54.156 "w_mbytes_per_sec": 0 00:09:54.156 }, 00:09:54.156 "claimed": false, 00:09:54.156 "zoned": false, 00:09:54.156 "supported_io_types": { 00:09:54.156 "read": true, 00:09:54.156 "write": true, 00:09:54.156 "unmap": true, 00:09:54.156 "flush": true, 00:09:54.156 "reset": true, 00:09:54.156 "nvme_admin": true, 00:09:54.156 "nvme_io": true, 00:09:54.156 "nvme_io_md": false, 00:09:54.156 "write_zeroes": true, 00:09:54.156 "zcopy": false, 00:09:54.156 "get_zone_info": false, 00:09:54.156 "zone_management": false, 00:09:54.156 "zone_append": false, 00:09:54.156 "compare": true, 00:09:54.156 "compare_and_write": true, 00:09:54.156 "abort": true, 00:09:54.156 "seek_hole": false, 00:09:54.156 "seek_data": false, 00:09:54.156 "copy": true, 00:09:54.156 "nvme_iov_md": false 00:09:54.156 }, 00:09:54.156 "memory_domains": [ 00:09:54.156 { 00:09:54.156 "dma_device_id": "system", 00:09:54.156 "dma_device_type": 1 00:09:54.156 } 00:09:54.156 ], 00:09:54.156 "driver_specific": { 00:09:54.156 "nvme": [ 00:09:54.156 { 00:09:54.156 "trid": { 00:09:54.156 "trtype": "TCP", 00:09:54.156 "adrfam": "IPv4", 00:09:54.156 "traddr": "10.0.0.2", 00:09:54.156 "trsvcid": "4420", 00:09:54.156 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:54.156 }, 00:09:54.156 "ctrlr_data": { 00:09:54.156 "cntlid": 1, 00:09:54.156 "vendor_id": "0x8086", 00:09:54.156 "model_number": "SPDK bdev Controller", 00:09:54.156 "serial_number": "SPDK0", 00:09:54.156 "firmware_revision": "24.09", 
00:09:54.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:54.156 "oacs": { 00:09:54.156 "security": 0, 00:09:54.156 "format": 0, 00:09:54.156 "firmware": 0, 00:09:54.156 "ns_manage": 0 00:09:54.156 }, 00:09:54.156 "multi_ctrlr": true, 00:09:54.156 "ana_reporting": false 00:09:54.156 }, 00:09:54.156 "vs": { 00:09:54.156 "nvme_version": "1.3" 00:09:54.156 }, 00:09:54.156 "ns_data": { 00:09:54.156 "id": 1, 00:09:54.156 "can_share": true 00:09:54.156 } 00:09:54.156 } 00:09:54.156 ], 00:09:54.156 "mp_policy": "active_passive" 00:09:54.156 } 00:09:54.156 } 00:09:54.156 ] 00:09:54.156 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2254322 00:09:54.156 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:54.156 17:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:54.156 Running I/O for 10 seconds... 
00:09:55.089 Latency(us) 00:09:55.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.089 Nvme0n1 : 1.00 15244.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:55.089 =================================================================================================================== 00:09:55.089 Total : 15244.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:55.089 00:09:56.021 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:09:56.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.278 Nvme0n1 : 2.00 15432.50 60.28 0.00 0.00 0.00 0.00 0.00 00:09:56.278 =================================================================================================================== 00:09:56.278 Total : 15432.50 60.28 0.00 0.00 0.00 0.00 0.00 00:09:56.278 00:09:56.278 true 00:09:56.278 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:09:56.278 17:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:56.536 17:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:56.536 17:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:56.536 17:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2254322 00:09:57.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.144 Nvme0n1 : 3.00 15495.33 60.53 
0.00 0.00 0.00 0.00 0.00 00:09:57.144 =================================================================================================================== 00:09:57.144 Total : 15495.33 60.53 0.00 0.00 0.00 0.00 0.00 00:09:57.144 00:09:58.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.075 Nvme0n1 : 4.00 15590.25 60.90 0.00 0.00 0.00 0.00 0.00 00:09:58.075 =================================================================================================================== 00:09:58.075 Total : 15590.25 60.90 0.00 0.00 0.00 0.00 0.00 00:09:58.075 00:09:59.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.446 Nvme0n1 : 5.00 15660.00 61.17 0.00 0.00 0.00 0.00 0.00 00:09:59.446 =================================================================================================================== 00:09:59.446 Total : 15660.00 61.17 0.00 0.00 0.00 0.00 0.00 00:09:59.446 00:10:00.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.378 Nvme0n1 : 6.00 15727.50 61.44 0.00 0.00 0.00 0.00 0.00 00:10:00.378 =================================================================================================================== 00:10:00.378 Total : 15727.50 61.44 0.00 0.00 0.00 0.00 0.00 00:10:00.378 00:10:01.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.311 Nvme0n1 : 7.00 15766.71 61.59 0.00 0.00 0.00 0.00 0.00 00:10:01.311 =================================================================================================================== 00:10:01.311 Total : 15766.71 61.59 0.00 0.00 0.00 0.00 0.00 00:10:01.311 00:10:02.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.246 Nvme0n1 : 8.00 15812.00 61.77 0.00 0.00 0.00 0.00 0.00 00:10:02.246 =================================================================================================================== 00:10:02.246 Total : 15812.00 61.77 0.00 
0.00 0.00 0.00 0.00 00:10:02.246 00:10:03.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.180 Nvme0n1 : 9.00 15840.33 61.88 0.00 0.00 0.00 0.00 0.00 00:10:03.180 =================================================================================================================== 00:10:03.180 Total : 15840.33 61.88 0.00 0.00 0.00 0.00 0.00 00:10:03.180 00:10:04.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:04.118 Nvme0n1 : 10.00 15856.50 61.94 0.00 0.00 0.00 0.00 0.00 00:10:04.118 =================================================================================================================== 00:10:04.118 Total : 15856.50 61.94 0.00 0.00 0.00 0.00 0.00 00:10:04.118 00:10:04.118 00:10:04.118 Latency(us) 00:10:04.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:04.118 Nvme0n1 : 10.01 15861.17 61.96 0.00 0.00 8065.48 5194.33 17476.27 00:10:04.118 =================================================================================================================== 00:10:04.118 Total : 15861.17 61.96 0.00 0.00 8065.48 5194.33 17476.27 00:10:04.118 0 00:10:04.118 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2254187 00:10:04.118 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2254187 ']' 00:10:04.118 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2254187 00:10:04.118 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:10:04.118 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:04.118 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2254187 00:10:04.376 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:04.376 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:04.376 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2254187' 00:10:04.376 killing process with pid 2254187 00:10:04.376 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2254187 00:10:04.376 Received shutdown signal, test time was about 10.000000 seconds 00:10:04.376 00:10:04.376 Latency(us) 00:10:04.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.376 =================================================================================================================== 00:10:04.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:04.376 17:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2254187 00:10:04.376 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.633 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2251669 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2251669 00:10:05.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2251669 Killed "${NVMF_APP[@]}" "$@" 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2255547 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2255547 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2255547 ']' 00:10:05.199 17:58:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.199 17:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:05.458 [2024-07-23 17:58:12.877662] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:10:05.458 [2024-07-23 17:58:12.877756] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.458 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.458 [2024-07-23 17:58:12.943029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.458 [2024-07-23 17:58:13.029372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.458 [2024-07-23 17:58:13.029437] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.458 [2024-07-23 17:58:13.029466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.458 [2024-07-23 17:58:13.029477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:05.458 [2024-07-23 17:58:13.029487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.458 [2024-07-23 17:58:13.029521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.715 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.715 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:05.715 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.715 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.715 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:05.715 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.715 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:05.974 [2024-07-23 17:58:13.388597] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:05.974 [2024-07-23 17:58:13.388747] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:05.974 [2024-07-23 17:58:13.388794] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:05.974 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:05.974 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 89610f64-691b-4cf5-9cc2-5e8fca68f5b4 00:10:05.974 17:58:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=89610f64-691b-4cf5-9cc2-5e8fca68f5b4 00:10:05.974 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:05.974 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:05.974 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:05.974 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:05.974 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:06.232 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89610f64-691b-4cf5-9cc2-5e8fca68f5b4 -t 2000 00:10:06.232 [ 00:10:06.232 { 00:10:06.232 "name": "89610f64-691b-4cf5-9cc2-5e8fca68f5b4", 00:10:06.232 "aliases": [ 00:10:06.232 "lvs/lvol" 00:10:06.232 ], 00:10:06.232 "product_name": "Logical Volume", 00:10:06.232 "block_size": 4096, 00:10:06.232 "num_blocks": 38912, 00:10:06.232 "uuid": "89610f64-691b-4cf5-9cc2-5e8fca68f5b4", 00:10:06.232 "assigned_rate_limits": { 00:10:06.232 "rw_ios_per_sec": 0, 00:10:06.232 "rw_mbytes_per_sec": 0, 00:10:06.232 "r_mbytes_per_sec": 0, 00:10:06.232 "w_mbytes_per_sec": 0 00:10:06.232 }, 00:10:06.232 "claimed": false, 00:10:06.232 "zoned": false, 00:10:06.232 "supported_io_types": { 00:10:06.232 "read": true, 00:10:06.232 "write": true, 00:10:06.232 "unmap": true, 00:10:06.232 "flush": false, 00:10:06.232 "reset": true, 00:10:06.232 "nvme_admin": false, 00:10:06.232 "nvme_io": false, 00:10:06.232 "nvme_io_md": false, 00:10:06.232 
"write_zeroes": true, 00:10:06.232 "zcopy": false, 00:10:06.232 "get_zone_info": false, 00:10:06.232 "zone_management": false, 00:10:06.232 "zone_append": false, 00:10:06.232 "compare": false, 00:10:06.232 "compare_and_write": false, 00:10:06.232 "abort": false, 00:10:06.232 "seek_hole": true, 00:10:06.232 "seek_data": true, 00:10:06.232 "copy": false, 00:10:06.232 "nvme_iov_md": false 00:10:06.232 }, 00:10:06.232 "driver_specific": { 00:10:06.232 "lvol": { 00:10:06.232 "lvol_store_uuid": "9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9", 00:10:06.232 "base_bdev": "aio_bdev", 00:10:06.232 "thin_provision": false, 00:10:06.232 "num_allocated_clusters": 38, 00:10:06.232 "snapshot": false, 00:10:06.232 "clone": false, 00:10:06.232 "esnap_clone": false 00:10:06.232 } 00:10:06.232 } 00:10:06.232 } 00:10:06.232 ] 00:10:06.491 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:06.491 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:06.491 17:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:06.491 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:06.491 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:06.491 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:06.749 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:06.749 17:58:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:07.007 [2024-07-23 17:58:14.621683] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:07.007 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.008 17:58:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:07.008 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:07.266 request: 00:10:07.266 { 00:10:07.266 "uuid": "9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9", 00:10:07.266 "method": "bdev_lvol_get_lvstores", 00:10:07.266 "req_id": 1 00:10:07.266 } 00:10:07.266 Got JSON-RPC error response 00:10:07.266 response: 00:10:07.266 { 00:10:07.266 "code": -19, 00:10:07.266 "message": "No such device" 00:10:07.266 } 00:10:07.266 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:07.266 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:07.266 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:07.266 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:07.266 17:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:07.525 aio_bdev 00:10:07.525 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 89610f64-691b-4cf5-9cc2-5e8fca68f5b4 00:10:07.525 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@897 -- # local bdev_name=89610f64-691b-4cf5-9cc2-5e8fca68f5b4 00:10:07.525 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:07.525 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:07.525 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:07.525 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:07.525 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:07.783 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89610f64-691b-4cf5-9cc2-5e8fca68f5b4 -t 2000 00:10:08.042 [ 00:10:08.042 { 00:10:08.042 "name": "89610f64-691b-4cf5-9cc2-5e8fca68f5b4", 00:10:08.042 "aliases": [ 00:10:08.042 "lvs/lvol" 00:10:08.042 ], 00:10:08.042 "product_name": "Logical Volume", 00:10:08.042 "block_size": 4096, 00:10:08.042 "num_blocks": 38912, 00:10:08.042 "uuid": "89610f64-691b-4cf5-9cc2-5e8fca68f5b4", 00:10:08.042 "assigned_rate_limits": { 00:10:08.042 "rw_ios_per_sec": 0, 00:10:08.042 "rw_mbytes_per_sec": 0, 00:10:08.042 "r_mbytes_per_sec": 0, 00:10:08.042 "w_mbytes_per_sec": 0 00:10:08.042 }, 00:10:08.042 "claimed": false, 00:10:08.042 "zoned": false, 00:10:08.042 "supported_io_types": { 00:10:08.042 "read": true, 00:10:08.042 "write": true, 00:10:08.042 "unmap": true, 00:10:08.042 "flush": false, 00:10:08.042 "reset": true, 00:10:08.042 "nvme_admin": false, 00:10:08.042 "nvme_io": false, 00:10:08.042 "nvme_io_md": false, 00:10:08.042 "write_zeroes": true, 00:10:08.042 "zcopy": false, 00:10:08.042 
"get_zone_info": false, 00:10:08.042 "zone_management": false, 00:10:08.042 "zone_append": false, 00:10:08.042 "compare": false, 00:10:08.042 "compare_and_write": false, 00:10:08.042 "abort": false, 00:10:08.042 "seek_hole": true, 00:10:08.042 "seek_data": true, 00:10:08.042 "copy": false, 00:10:08.042 "nvme_iov_md": false 00:10:08.042 }, 00:10:08.043 "driver_specific": { 00:10:08.043 "lvol": { 00:10:08.043 "lvol_store_uuid": "9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9", 00:10:08.043 "base_bdev": "aio_bdev", 00:10:08.043 "thin_provision": false, 00:10:08.043 "num_allocated_clusters": 38, 00:10:08.043 "snapshot": false, 00:10:08.043 "clone": false, 00:10:08.043 "esnap_clone": false 00:10:08.043 } 00:10:08.043 } 00:10:08.043 } 00:10:08.043 ] 00:10:08.043 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:08.043 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:08.043 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:08.301 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:08.301 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:08.301 17:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:08.558 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:08.558 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89610f64-691b-4cf5-9cc2-5e8fca68f5b4 00:10:08.815 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d4ebedd-c0ec-41f4-bc79-831cb02f5fb9 00:10:09.073 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:09.331 00:10:09.331 real 0m18.755s 00:10:09.331 user 0m47.593s 00:10:09.331 sys 0m4.668s 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:09.331 ************************************ 00:10:09.331 END TEST lvs_grow_dirty 00:10:09.331 ************************************ 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:09.331 17:58:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:09.331 nvmf_trace.0 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.331 17:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.331 rmmod nvme_tcp 00:10:09.589 rmmod nvme_fabrics 00:10:09.589 rmmod nvme_keyring 00:10:09.589 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2255547 ']' 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@490 -- # killprocess 2255547 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2255547 ']' 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2255547 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2255547 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2255547' 00:10:09.590 killing process with pid 2255547 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2255547 00:10:09.590 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2255547 00:10:09.849 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.849 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.849 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.849 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.849 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.849 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:10:09.849 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.849 17:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.755 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.755 00:10:11.755 real 0m41.397s 00:10:11.755 user 1m9.778s 00:10:11.755 sys 0m8.485s 00:10:11.755 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.755 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:11.755 ************************************ 00:10:11.755 END TEST nvmf_lvs_grow 00:10:11.755 ************************************ 00:10:11.756 17:58:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:11.756 17:58:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:11.756 17:58:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.756 17:58:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.756 17:58:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 ************************************ 00:10:11.756 START TEST nvmf_bdev_io_wait 00:10:11.756 ************************************ 00:10:11.756 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:12.015 * Looking for test storage... 
00:10:12.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.015 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:12.016 17:58:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:10:12.016 17:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.548 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.548 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.548 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.548 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.548 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.548 17:58:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.548 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:14.549 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:14.549 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.549 17:58:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:14.549 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:14.549 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.549 17:58:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.549 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:10:14.550 00:10:14.550 --- 10.0.0.2 ping statistics --- 00:10:14.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.550 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:10:14.550 00:10:14.550 --- 10.0.0.1 ping statistics --- 00:10:14.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.550 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2258064 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2258064 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2258064 ']' 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.550 17:58:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.550 [2024-07-23 17:58:21.844684] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:10:14.550 [2024-07-23 17:58:21.844767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.550 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.550 [2024-07-23 17:58:21.909199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.550 [2024-07-23 17:58:21.998107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:14.550 [2024-07-23 17:58:21.998163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.550 [2024-07-23 17:58:21.998192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.550 [2024-07-23 17:58:21.998204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.550 [2024-07-23 17:58:21.998214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.550 [2024-07-23 17:58:21.998679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.550 [2024-07-23 17:58:21.998699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.550 [2024-07-23 17:58:21.998757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.550 [2024-07-23 17:58:21.998760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.550 
17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.550 [2024-07-23 17:58:22.167584] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.550 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.809 Malloc0 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.809 
17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.809 [2024-07-23 17:58:22.235784] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2258202 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2258204 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2258206 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:14.809 { 00:10:14.809 "params": { 00:10:14.809 "name": "Nvme$subsystem", 00:10:14.809 "trtype": "$TEST_TRANSPORT", 00:10:14.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.809 "adrfam": "ipv4", 00:10:14.809 "trsvcid": "$NVMF_PORT", 00:10:14.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.809 "hdgst": ${hdgst:-false}, 00:10:14.809 "ddgst": ${ddgst:-false} 00:10:14.809 }, 00:10:14.809 "method": "bdev_nvme_attach_controller" 00:10:14.809 } 00:10:14.809 EOF 00:10:14.809 )") 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:14.809 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:14.810 17:58:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:14.810 { 00:10:14.810 "params": { 00:10:14.810 "name": "Nvme$subsystem", 00:10:14.810 "trtype": "$TEST_TRANSPORT", 00:10:14.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.810 "adrfam": "ipv4", 00:10:14.810 "trsvcid": "$NVMF_PORT", 00:10:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.810 "hdgst": ${hdgst:-false}, 00:10:14.810 "ddgst": ${ddgst:-false} 00:10:14.810 }, 00:10:14.810 "method": "bdev_nvme_attach_controller" 00:10:14.810 } 00:10:14.810 EOF 00:10:14.810 )") 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2258208 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:14.810 { 00:10:14.810 "params": { 00:10:14.810 "name": "Nvme$subsystem", 00:10:14.810 "trtype": "$TEST_TRANSPORT", 00:10:14.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.810 "adrfam": "ipv4", 00:10:14.810 "trsvcid": "$NVMF_PORT", 00:10:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.810 "hdgst": ${hdgst:-false}, 00:10:14.810 "ddgst": ${ddgst:-false} 00:10:14.810 }, 00:10:14.810 "method": "bdev_nvme_attach_controller" 00:10:14.810 } 00:10:14.810 EOF 00:10:14.810 )") 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:14.810 { 00:10:14.810 "params": { 00:10:14.810 "name": "Nvme$subsystem", 00:10:14.810 "trtype": "$TEST_TRANSPORT", 00:10:14.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.810 "adrfam": "ipv4", 00:10:14.810 "trsvcid": "$NVMF_PORT", 00:10:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.810 "hdgst": ${hdgst:-false}, 00:10:14.810 "ddgst": ${ddgst:-false} 00:10:14.810 }, 00:10:14.810 "method": "bdev_nvme_attach_controller" 00:10:14.810 } 00:10:14.810 EOF 00:10:14.810 )") 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 2258202 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:14.810 "params": { 00:10:14.810 "name": "Nvme1", 00:10:14.810 "trtype": "tcp", 00:10:14.810 "traddr": "10.0.0.2", 00:10:14.810 "adrfam": "ipv4", 00:10:14.810 "trsvcid": "4420", 00:10:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.810 "hdgst": false, 00:10:14.810 "ddgst": false 00:10:14.810 }, 00:10:14.810 "method": "bdev_nvme_attach_controller" 00:10:14.810 }' 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:14.810 "params": { 00:10:14.810 "name": "Nvme1", 00:10:14.810 "trtype": "tcp", 00:10:14.810 "traddr": "10.0.0.2", 00:10:14.810 "adrfam": "ipv4", 00:10:14.810 "trsvcid": "4420", 00:10:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.810 "hdgst": false, 00:10:14.810 "ddgst": false 00:10:14.810 }, 00:10:14.810 "method": "bdev_nvme_attach_controller" 00:10:14.810 }' 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 
-- # printf '%s\n' '{ 00:10:14.810 "params": { 00:10:14.810 "name": "Nvme1", 00:10:14.810 "trtype": "tcp", 00:10:14.810 "traddr": "10.0.0.2", 00:10:14.810 "adrfam": "ipv4", 00:10:14.810 "trsvcid": "4420", 00:10:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.810 "hdgst": false, 00:10:14.810 "ddgst": false 00:10:14.810 }, 00:10:14.810 "method": "bdev_nvme_attach_controller" 00:10:14.810 }' 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:14.810 17:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:14.810 "params": { 00:10:14.810 "name": "Nvme1", 00:10:14.810 "trtype": "tcp", 00:10:14.810 "traddr": "10.0.0.2", 00:10:14.810 "adrfam": "ipv4", 00:10:14.810 "trsvcid": "4420", 00:10:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.810 "hdgst": false, 00:10:14.810 "ddgst": false 00:10:14.810 }, 00:10:14.810 "method": "bdev_nvme_attach_controller" 00:10:14.810 }' 00:10:14.810 [2024-07-23 17:58:22.284571] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:10:14.810 [2024-07-23 17:58:22.284571] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:10:14.810 [2024-07-23 17:58:22.284571] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:10:14.810 [2024-07-23 17:58:22.284576] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:10:14.810 [2024-07-23 17:58:22.284662] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:14.810 [2024-07-23 17:58:22.284663] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:14.810 [2024-07-23 17:58:22.284664] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:14.810 [2024-07-23 17:58:22.284665] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:14.810 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.810 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-23 17:58:22.458921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.069 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.069 [2024-07-23 17:58:22.534660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:15.069 [2024-07-23 17:58:22.559089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.069 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.069 [2024-07-23 17:58:22.634169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:15.069 [2024-07-23 17:58:22.660346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.069 [2024-07-23 17:58:22.728060] app.c: 909:spdk_app_start: 
*NOTICE*: Total cores available: 1 00:10:15.327 [2024-07-23 17:58:22.733861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.327 [2024-07-23 17:58:22.795027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:15.327 Running I/O for 1 seconds... 00:10:15.327 Running I/O for 1 seconds... 00:10:15.586 Running I/O for 1 seconds... 00:10:15.586 Running I/O for 1 seconds... 00:10:16.547 00:10:16.547 Latency(us) 00:10:16.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.547 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:16.547 Nvme1n1 : 1.00 195351.92 763.09 0.00 0.00 652.60 267.00 958.77 00:10:16.547 =================================================================================================================== 00:10:16.547 Total : 195351.92 763.09 0.00 0.00 652.60 267.00 958.77 00:10:16.547 00:10:16.547 Latency(us) 00:10:16.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.547 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:16.547 Nvme1n1 : 1.02 6580.30 25.70 0.00 0.00 19296.04 8349.77 30486.38 00:10:16.547 =================================================================================================================== 00:10:16.547 Total : 6580.30 25.70 0.00 0.00 19296.04 8349.77 30486.38 00:10:16.547 00:10:16.547 Latency(us) 00:10:16.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.547 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:16.547 Nvme1n1 : 1.01 9343.81 36.50 0.00 0.00 13629.59 9223.59 26408.58 00:10:16.547 =================================================================================================================== 00:10:16.547 Total : 9343.81 36.50 0.00 0.00 13629.59 9223.59 26408.58 00:10:16.547 00:10:16.547 Latency(us) 00:10:16.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:10:16.547 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:16.547 Nvme1n1 : 1.00 6439.34 25.15 0.00 0.00 19822.89 4805.97 46215.02 00:10:16.547 =================================================================================================================== 00:10:16.547 Total : 6439.34 25.15 0.00 0.00 19822.89 4805.97 46215.02 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2258204 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2258206 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2258208 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.805 17:58:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.805 rmmod nvme_tcp 00:10:16.805 rmmod nvme_fabrics 00:10:16.805 rmmod nvme_keyring 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2258064 ']' 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2258064 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2258064 ']' 00:10:16.805 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2258064 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2258064 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2258064' 00:10:17.064 killing process with pid 2258064 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2258064 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # wait 2258064 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.064 17:58:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.608 00:10:19.608 real 0m7.362s 00:10:19.608 user 0m16.874s 00:10:19.608 sys 0m3.498s 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:19.608 ************************************ 00:10:19.608 END TEST nvmf_bdev_io_wait 00:10:19.608 ************************************ 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.608 ************************************ 00:10:19.608 START TEST nvmf_queue_depth 00:10:19.608 ************************************ 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:19.608 * Looking for test storage... 00:10:19.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:19.608 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.609 17:58:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.609 17:58:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:21.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:21.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.512 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.513 17:58:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:21.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:21.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.513 17:58:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.513 
17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:10:21.513 00:10:21.513 --- 10.0.0.2 ping statistics --- 00:10:21.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.513 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:10:21.513 00:10:21.513 --- 10.0.0.1 ping statistics --- 00:10:21.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.513 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.513 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.771 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:21.771 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.771 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2260438 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2260438 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2260438 ']' 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.772 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:21.772 [2024-07-23 17:58:29.225685] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:10:21.772 [2024-07-23 17:58:29.225760] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.772 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.772 [2024-07-23 17:58:29.290259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.772 [2024-07-23 17:58:29.377023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.772 [2024-07-23 17:58:29.377090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:21.772 [2024-07-23 17:58:29.377118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.772 [2024-07-23 17:58:29.377130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.772 [2024-07-23 17:58:29.377139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.772 [2024-07-23 17:58:29.377167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.030 [2024-07-23 17:58:29.518177] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.030 Malloc0 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.030 [2024-07-23 17:58:29.574817] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.030 17:58:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2260464 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2260464 /var/tmp/bdevperf.sock 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2260464 ']' 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:22.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.030 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.030 [2024-07-23 17:58:29.618553] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:10:22.030 [2024-07-23 17:58:29.618633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260464 ] 00:10:22.030 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.030 [2024-07-23 17:58:29.676154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.288 [2024-07-23 17:58:29.761550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.288 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.288 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:22.288 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:22.288 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.288 17:58:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.546 NVMe0n1 00:10:22.546 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.546 17:58:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:22.546 Running I/O for 10 seconds... 
00:10:34.745 00:10:34.745 Latency(us) 00:10:34.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.745 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:34.745 Verification LBA range: start 0x0 length 0x4000 00:10:34.745 NVMe0n1 : 10.09 8426.58 32.92 0.00 0.00 121052.47 25631.86 73011.96 00:10:34.745 =================================================================================================================== 00:10:34.745 Total : 8426.58 32.92 0.00 0.00 121052.47 25631.86 73011.96 00:10:34.745 0 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2260464 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2260464 ']' 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2260464 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2260464 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2260464' 00:10:34.745 killing process with pid 2260464 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2260464 00:10:34.745 Received shutdown signal, test time was about 10.000000 seconds 00:10:34.745 00:10:34.745 Latency(us) 00:10:34.745 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.745 =================================================================================================================== 00:10:34.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2260464 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.745 rmmod nvme_tcp 00:10:34.745 rmmod nvme_fabrics 00:10:34.745 rmmod nvme_keyring 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2260438 ']' 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2260438 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2260438 ']' 
00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2260438 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2260438 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2260438' 00:10:34.745 killing process with pid 2260438 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2260438 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2260438 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.745 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.746 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:10:34.746 17:58:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:35.316 00:10:35.316 real 0m16.094s 00:10:35.316 user 0m20.773s 00:10:35.316 sys 0m4.003s 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:35.316 ************************************ 00:10:35.316 END TEST nvmf_queue_depth 00:10:35.316 ************************************ 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.316 ************************************ 00:10:35.316 START TEST nvmf_target_multipath 00:10:35.316 ************************************ 00:10:35.316 17:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:35.575 * Looking for test storage... 
00:10:35.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:35.575 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.576 17:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:10:38.113 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:38.113 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:38.113 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:38.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:38.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.114 17:58:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:38.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:38.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:10:38.114 00:10:38.114 --- 10.0.0.2 ping statistics --- 00:10:38.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.114 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:10:38.114 00:10:38.114 --- 10.0.0.1 ping statistics --- 00:10:38.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.114 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:38.114 17:58:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:38.114 only one NIC for nvmf test 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:38.114 rmmod nvme_tcp 00:10:38.114 rmmod nvme_fabrics 00:10:38.114 rmmod nvme_keyring 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:38.114 17:58:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.114 17:58:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:40.017 17:58:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.017 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.018 00:10:40.018 real 0m4.500s 00:10:40.018 user 0m0.889s 00:10:40.018 sys 0m1.615s 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:40.018 ************************************ 00:10:40.018 END TEST nvmf_target_multipath 00:10:40.018 ************************************ 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.018 17:58:47 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.018 ************************************ 00:10:40.018 START TEST nvmf_zcopy 00:10:40.018 ************************************ 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.018 * Looking for test storage... 00:10:40.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.018 17:58:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.018 17:58:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.018 17:58:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:40.018 17:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.547 17:58:49 
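The array setup traced above groups supported NICs by device family (Intel E810, Intel X722, Mellanox) by using vendor:device IDs as keys into a PCI bus cache. A toy classifier over the same ID table (the real script's `pci_bus_cache` lookup is replaced here by a literal pattern match, purely for illustration):

```shell
#!/bin/sh
# Toy classifier over the vendor:device IDs seen in the trace above.
# The real common.sh indexes an associative pci_bus_cache map; this
# sketch just pattern-matches the same IDs.
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
    0x8086:0x37d2)               echo x722 ;;   # Intel X722
    0x15b3:*)                    echo mlx  ;;   # Mellanox ConnectX family
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086:0x159b   # prints "e810" -- the ports found in this run
```

In this run both discovered ports (0000:0a:00.0 and 0000:0a:00.1, device 0x159b) fall into the e810 bucket, which is why `pci_devs` is reset to the e810 list.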
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:42.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.547 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:42.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.548 17:58:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:42.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.548 
17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:42.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:42.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:42.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:10:42.548 00:10:42.548 --- 10.0.0.2 ping statistics --- 00:10:42.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.548 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:10:42.548 00:10:42.548 --- 10.0.0.1 ping statistics --- 00:10:42.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.548 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
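The `nvmf_tcp_init` sequence above carves one port of the NIC pair into a network namespace so target and initiator can exercise real hardware on a single host, then verifies reachability in both directions with ping. A dry-run sketch of the same topology (interface names and 10.0.0.x addresses are taken from the log; `run` only echoes, so no root is needed — swap it for `eval "$@"` as root to actually apply it):

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup traced above.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # moved into the namespace; hosts the NVMe-oF target
INITIATOR_IF=cvl_0_1   # stays in the root namespace for the initiator

run() { echo "+ $*"; }  # replace body with: eval "$@"

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run ip link set "$INITIATOR_IF" up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

The target application is then launched with `ip netns exec $NS ...` prepended (the `NVMF_APP` prepend in the trace), so it listens on 10.0.0.2 inside the namespace while bdevperf connects from the root namespace.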
-- common/autotest_common.sh@722 -- # xtrace_disable 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2265646 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2265646 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2265646 ']' 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.548 17:58:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.548 [2024-07-23 17:58:49.930669] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:10:42.548 [2024-07-23 17:58:49.930753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.548 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.548 [2024-07-23 17:58:49.994927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.548 [2024-07-23 17:58:50.091501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.548 [2024-07-23 17:58:50.091567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.548 [2024-07-23 17:58:50.091597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.548 [2024-07-23 17:58:50.091608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.548 [2024-07-23 17:58:50.091618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:42.548 [2024-07-23 17:58:50.091651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.548 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.548 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:42.548 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.548 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.548 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.806 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.806 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:42.806 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:42.806 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.806 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.807 [2024-07-23 17:58:50.230345] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.807 [2024-07-23 17:58:50.246613] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.807 malloc0 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.807 17:58:50 
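The `rpc_cmd` calls above stand up the zcopy-enabled target: create the TCP transport with zero-copy on, add subsystem cnode1, attach TCP listeners for the subsystem and discovery service on 10.0.0.2:4420, and expose a 32 MiB malloc bdev as namespace 1. A dry-run of the same RPC sequence (`rpc` only echoes here; the real client is spdk's `rpc.py` against the default `/var/tmp/spdk.sock` socket):

```shell
#!/bin/sh
# Dry-run of the RPC bring-up traced above; swap the echo for the real
# rpc.py client to apply it against a running nvmf_tgt.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB bdev, 4 KiB blocks
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

`-m 10` caps the subsystem at 10 namespaces, and `--zcopy` on the transport is what this test (nvmf_zcopy) exists to exercise.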
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:42.807 { 00:10:42.807 "params": { 00:10:42.807 "name": "Nvme$subsystem", 00:10:42.807 "trtype": "$TEST_TRANSPORT", 00:10:42.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.807 "adrfam": "ipv4", 00:10:42.807 "trsvcid": "$NVMF_PORT", 00:10:42.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.807 "hdgst": ${hdgst:-false}, 00:10:42.807 "ddgst": ${ddgst:-false} 00:10:42.807 }, 00:10:42.807 "method": "bdev_nvme_attach_controller" 00:10:42.807 } 00:10:42.807 EOF 00:10:42.807 )") 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:42.807 17:58:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:42.807 "params": { 00:10:42.807 "name": "Nvme1", 00:10:42.807 "trtype": "tcp", 00:10:42.807 "traddr": "10.0.0.2", 00:10:42.807 "adrfam": "ipv4", 00:10:42.807 "trsvcid": "4420", 00:10:42.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.807 "hdgst": false, 00:10:42.807 "ddgst": false 00:10:42.807 }, 00:10:42.807 "method": "bdev_nvme_attach_controller" 00:10:42.807 }' 00:10:42.807 [2024-07-23 17:58:50.339862] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:10:42.807 [2024-07-23 17:58:50.339952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265675 ] 00:10:42.807 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.807 [2024-07-23 17:58:50.404444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.065 [2024-07-23 17:58:50.491460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.065 Running I/O for 10 seconds... 
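The `gen_nvmf_target_json` expansion above turns a per-subsystem heredoc template into the JSON that bdevperf reads from `/dev/fd/62`. A minimal standalone sketch of that heredoc-template pattern (the variable values mirror this run's rendered output; the single-subsystem loop and the comma-joining are simplified from the real `common.sh`):

```shell
#!/bin/sh
# Sketch of the heredoc-template pattern behind gen_nvmf_target_json:
# each subsystem number is substituted into one JSON fragment and the
# fragments are comma-joined. hdgst/ddgst default to false when unset.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=
for subsystem in 1; do
  config="${config:+$config,}$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)"
done
printf '%s\n' "$config"
```

In the trace, the rendered result (the `printf '%s\n' '{ ... }'` block) is exactly this fragment with `$subsystem` set to 1, piped through `jq .` before being handed to bdevperf.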
00:10:55.309 00:10:55.309 Latency(us) 00:10:55.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.309 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:55.309 Verification LBA range: start 0x0 length 0x1000 00:10:55.309 Nvme1n1 : 10.02 5582.44 43.61 0.00 0.00 22868.28 452.08 33593.27 00:10:55.309 =================================================================================================================== 00:10:55.309 Total : 5582.44 43.61 0.00 0.00 22868.28 452.08 33593.27 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2266985 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:55.309 { 00:10:55.309 "params": { 00:10:55.309 "name": "Nvme$subsystem", 00:10:55.309 "trtype": "$TEST_TRANSPORT", 00:10:55.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.309 "adrfam": "ipv4", 00:10:55.309 "trsvcid": "$NVMF_PORT", 00:10:55.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.309 "hdgst": 
${hdgst:-false}, 00:10:55.309 "ddgst": ${ddgst:-false} 00:10:55.309 }, 00:10:55.309 "method": "bdev_nvme_attach_controller" 00:10:55.309 } 00:10:55.309 EOF 00:10:55.309 )") 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:55.309 [2024-07-23 17:59:00.975634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.309 [2024-07-23 17:59:00.975691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:55.309 17:59:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:55.309 "params": { 00:10:55.309 "name": "Nvme1", 00:10:55.309 "trtype": "tcp", 00:10:55.309 "traddr": "10.0.0.2", 00:10:55.309 "adrfam": "ipv4", 00:10:55.309 "trsvcid": "4420", 00:10:55.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.309 "hdgst": false, 00:10:55.309 "ddgst": false 00:10:55.309 }, 00:10:55.309 "method": "bdev_nvme_attach_controller" 00:10:55.309 }' 00:10:55.309 [2024-07-23 17:59:00.983575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.309 [2024-07-23 17:59:00.983613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.309 [2024-07-23 17:59:00.991589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.309 [2024-07-23 17:59:00.991626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.309 [2024-07-23 17:59:00.999622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.309 [2024-07-23 17:59:00.999643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.309 [2024-07-23 17:59:01.007644] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.309 [2024-07-23 17:59:01.007680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.309 [2024-07-23 17:59:01.011134] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:10:55.310 [2024-07-23 17:59:01.011205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266985 ] 00:10:55.310 [2024-07-23 17:59:01.015677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.015697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.023691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.023711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.031724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.031744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.039735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.039757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.310 [2024-07-23 17:59:01.047753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.047772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.055776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.055797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.063794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.063814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.071816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.071836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.073249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:55.310 [2024-07-23 17:59:01.079860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.079887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.087891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.087927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.095880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.095900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.103903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.103923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.111926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.111946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.119948]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.119969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.127993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.128025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.135998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.136023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.144012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.144033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.152031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.152062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.160053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.160074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.164849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:55.310 [2024-07-23 17:59:01.168073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.168093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.176095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.176114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:10:55.310 [2024-07-23 17:59:01.184145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.184178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.192171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.192220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.200192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.200228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.208216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.208251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.216233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.216269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.224256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.224291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.232254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.232275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.240295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.240346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.248343] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.248393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.256379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.256418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.264358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.264394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.272398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.272431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.280423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.280450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.288440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.288466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.296450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.296473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.304475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.310 [2024-07-23 17:59:01.304500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.310 [2024-07-23 17:59:01.312501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.312527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.320524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.320549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.328550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.328573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.374554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.374581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 [2024-07-23 17:59:01.380701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.310 [2024-07-23 17:59:01.380723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.310 Running I/O for 5 seconds...
00:10:55.311 [2024-07-23 17:59:01.388720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.388741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.403105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.403133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.413879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.413906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.424223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.424250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.435253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.435280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.448133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.448161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.458808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.458835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.469500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.469528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.480541] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.480570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.491598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.491640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.504203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.504230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.514588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.514630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.525108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.525142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.535720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.535749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.546328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.546356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.556940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.556968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.567539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.567566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.578215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.578241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.591101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.591127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.601235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.601261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.611907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.611933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.621948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.621974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.632596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.632637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.643071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.643097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.655665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 
[2024-07-23 17:59:01.655692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.665536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.665563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.675991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.676017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.686403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.686430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.697009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.697035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.709184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.709210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.718843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.718869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.729697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.729733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.742350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.742377] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.752165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.752192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.763223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.763248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.774052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.774078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.784740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.784766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.795330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.795356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.806147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.806173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.818173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.818199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.829430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.829458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:55.311 [2024-07-23 17:59:01.838447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.838475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.849243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.849271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.859480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.859522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.870012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.870039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.880350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.880377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.890775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.890802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.900965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.311 [2024-07-23 17:59:01.900993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.311 [2024-07-23 17:59:01.911378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.911405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:01.922339] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.922367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:01.932891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.932925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:01.944994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.945020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:01.955138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.955165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:01.965489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.965516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:01.975918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.975944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:01.988563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.988590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:01.998611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:01.998643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.009332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.009358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.019732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.019758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.029681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.029707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.039902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.039928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.050414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.050441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.060613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.060639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.071065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.071091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.081432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.081460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.092277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 
[2024-07-23 17:59:02.092327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.102967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.102992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.115352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.115378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.126869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.126895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.136008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.136044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.146864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.146890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.157138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.157165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.167546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.167572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.177904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.177930] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.188214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.188239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.198581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.198622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.209107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.209133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.219768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.219794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.230025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.230051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.242188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.242213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.251469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.251497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.312 [2024-07-23 17:59:02.262355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.262382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:55.312 [2024-07-23 17:59:02.272546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.312 [2024-07-23 17:59:02.272574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same pair of errors repeats roughly every 10 ms from 17:59:02.283 through 17:59:04.040 ...]
00:10:56.609 [2024-07-23 17:59:04.052798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.052824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:56.609 [2024-07-23 17:59:04.064270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.064310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.073168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.073194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.084194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.084220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.096634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.096676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.105817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.105844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.118526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.118554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.130130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.130157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.138911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.138938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.150361] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.150397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.163069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.163096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.174715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.174744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.183900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.183926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.195382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.195409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.205985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.206012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.216849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.216876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.229311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.229348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.239565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.239617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.250274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.250314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.609 [2024-07-23 17:59:04.260824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.609 [2024-07-23 17:59:04.260851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.271562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.271590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.284117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.284142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.294689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.294715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.305399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.305426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.317561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.317588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.327107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 
[2024-07-23 17:59:04.327134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.340600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.340628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.352449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.352476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.361913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.361939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.373465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.373492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.385700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.385740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.395514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.395542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.405662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.405703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.416226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.416252] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.426697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.426724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.437181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.437222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.868 [2024-07-23 17:59:04.447576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.868 [2024-07-23 17:59:04.447623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.869 [2024-07-23 17:59:04.457926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.869 [2024-07-23 17:59:04.457952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.869 [2024-07-23 17:59:04.467764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.869 [2024-07-23 17:59:04.467790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.869 [2024-07-23 17:59:04.478183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.869 [2024-07-23 17:59:04.478209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.869 [2024-07-23 17:59:04.488472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.869 [2024-07-23 17:59:04.488499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.869 [2024-07-23 17:59:04.498894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.869 [2024-07-23 17:59:04.498920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:56.869 [2024-07-23 17:59:04.509056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.869 [2024-07-23 17:59:04.509082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.869 [2024-07-23 17:59:04.519383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.869 [2024-07-23 17:59:04.519410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.530123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.530149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.541333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.541362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.551987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.552012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.564418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.564445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.574276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.574328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.584606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.584632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.594998] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.595024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.605443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.605469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.616022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.616048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.628739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.628766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.638527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.638555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.650926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.650951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.661048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.661075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.671215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.671241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.681826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.681851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.692628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.692655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.703056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.703081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.715017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.715043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.724114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.724140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.736923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.736950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.747069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.747096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.757282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.757332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.767907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 
[2024-07-23 17:59:04.767933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.127 [2024-07-23 17:59:04.781359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.127 [2024-07-23 17:59:04.781385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.791544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.791572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.802630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.802665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.814988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.815013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.825019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.825045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.835294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.835343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.845941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.845966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.858434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.858460] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.868708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.868734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.879250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.879277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.889938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.889964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.900701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.900742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.913159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.913185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.386 [2024-07-23 17:59:04.923469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.386 [2024-07-23 17:59:04.923495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:04.934085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:04.934111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:04.946439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:04.946465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:57.387 [2024-07-23 17:59:04.956631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:04.956658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:04.967414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:04.967442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:04.978028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:04.978055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:04.990214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:04.990240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:04.999935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:04.999961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:05.010070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:05.010104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:05.020746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:05.020785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:05.031219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:05.031245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.387 [2024-07-23 17:59:05.041942] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.387 [2024-07-23 17:59:05.041969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.645 [2024-07-23 17:59:05.054201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.645 [2024-07-23 17:59:05.054227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.645 [2024-07-23 17:59:05.064435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.064469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.075046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.075073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.085650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.085691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.096036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.096061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.106285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.106334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.116723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.116749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.126912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.126938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.137466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.137493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.147901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.147927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.158646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.158673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.171191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.171216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.181405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.181433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.192097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.192123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.204258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.204284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.214238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 
[2024-07-23 17:59:05.214272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.224624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.224650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.235139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.235165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.247536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.247578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.257884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.257910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.268180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.268207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.278576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.278618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.289391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.289419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.646 [2024-07-23 17:59:05.301901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.646 [2024-07-23 17:59:05.301927] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.905 [2024-07-23 17:59:05.312386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.312414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.905 [2024-07-23 17:59:05.322860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.322887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.905 [2024-07-23 17:59:05.333340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.333368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.905 [2024-07-23 17:59:05.344000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.344028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.905 [2024-07-23 17:59:05.354557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.354584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.905 [2024-07-23 17:59:05.364849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.364876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.905 [2024-07-23 17:59:05.375329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.375358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.905 [2024-07-23 17:59:05.386064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.386089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:57.905 [2024-07-23 17:59:05.396961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.905 [2024-07-23 17:59:05.397002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [this two-line error pair repeats with advancing timestamps; intermediate repeats elided] 00:10:58.943 [2024-07-23 17:59:06.382052] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.943 [2024-07-23 17:59:06.382078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.943 [2024-07-23 17:59:06.392218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.943 [2024-07-23 17:59:06.392246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.943 [2024-07-23 17:59:06.402700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.943 [2024-07-23 17:59:06.402727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.943 [2024-07-23 17:59:06.409698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.943 [2024-07-23 17:59:06.409740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.943 00:10:58.943 Latency(us) 00:10:58.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.943 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:58.943 Nvme1n1 : 5.01 12019.42 93.90 0.00 0.00 10635.63 4490.43 22524.97 00:10:58.943 =================================================================================================================== 00:10:58.943 Total : 12019.42 93.90 0.00 0.00 10635.63 4490.43 22524.97 00:10:58.943 [2024-07-23 17:59:06.414860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.943 [2024-07-23 17:59:06.414888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.943 [2024-07-23 17:59:06.422884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.943 [2024-07-23 17:59:06.422907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.943 [2024-07-23 17:59:06.430921] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.943 [2024-07-23 17:59:06.430959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [this two-line error pair repeats with advancing timestamps; intermediate repeats elided] 00:10:59.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2266985) - No such process 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 2266985 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.203 delay0 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.203 17:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:59.203 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.203 [2024-07-23 17:59:06.726766] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery 
service referral 00:11:05.764 Initializing NVMe Controllers 00:11:05.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:05.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:05.764 Initialization complete. Launching workers. 00:11:05.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 86 00:11:05.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 373, failed to submit 33 00:11:05.764 success 196, unsuccess 177, failed 0 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.764 rmmod nvme_tcp 00:11:05.764 rmmod nvme_fabrics 00:11:05.764 rmmod nvme_keyring 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2265646 ']' 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 
2265646 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2265646 ']' 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2265646 00:11:05.764 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:05.765 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.765 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2265646 00:11:05.765 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:05.765 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:05.765 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2265646' 00:11:05.765 killing process with pid 2265646 00:11:05.765 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2265646 00:11:05.765 17:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2265646 00:11:05.765 17:59:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:05.765 17:59:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:05.765 17:59:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:05.765 17:59:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.765 17:59:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.765 17:59:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.765 17:59:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:11:05.765 17:59:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.670 00:11:07.670 real 0m27.706s 00:11:07.670 user 0m38.845s 00:11:07.670 sys 0m9.087s 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:07.670 ************************************ 00:11:07.670 END TEST nvmf_zcopy 00:11:07.670 ************************************ 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.670 ************************************ 00:11:07.670 START TEST nvmf_nmic 00:11:07.670 ************************************ 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:07.670 * Looking for test storage... 
00:11:07.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.670 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.671 
17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.671 17:59:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.671 17:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:10.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:10.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:10.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:10.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:10.205 17:59:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.205 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:11:10.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:11:10.206 00:11:10.206 --- 10.0.0.2 ping statistics --- 00:11:10.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.206 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:11:10.206 00:11:10.206 --- 10.0.0.1 ping statistics --- 00:11:10.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.206 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2270261 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2270261 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2270261 ']' 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.206 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.206 [2024-07-23 17:59:17.669837] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:11:10.206 [2024-07-23 17:59:17.669920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.206 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.206 [2024-07-23 17:59:17.739171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.206 [2024-07-23 17:59:17.830355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.206 [2024-07-23 17:59:17.830435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.206 [2024-07-23 17:59:17.830449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.206 [2024-07-23 17:59:17.830460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.206 [2024-07-23 17:59:17.830470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:10.206 [2024-07-23 17:59:17.830519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.206 [2024-07-23 17:59:17.830590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.206 [2024-07-23 17:59:17.830612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.206 [2024-07-23 17:59:17.830617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.464 [2024-07-23 17:59:17.986823] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.464 17:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.464 Malloc0 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.464 [2024-07-23 17:59:18.040471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:10.464 test case1: single bdev can't be used in multiple subsystems 
00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:10.464 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.465 [2024-07-23 17:59:18.064261] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:10.465 [2024-07-23 17:59:18.064289] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:10.465 [2024-07-23 17:59:18.064327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.465 request: 00:11:10.465 { 00:11:10.465 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:10.465 "namespace": { 00:11:10.465 
"bdev_name": "Malloc0", 00:11:10.465 "no_auto_visible": false 00:11:10.465 }, 00:11:10.465 "method": "nvmf_subsystem_add_ns", 00:11:10.465 "req_id": 1 00:11:10.465 } 00:11:10.465 Got JSON-RPC error response 00:11:10.465 response: 00:11:10.465 { 00:11:10.465 "code": -32602, 00:11:10.465 "message": "Invalid parameters" 00:11:10.465 } 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:10.465 Adding namespace failed - expected result. 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:10.465 test case2: host connect to nvmf target in multiple paths 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.465 [2024-07-23 17:59:18.072395] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.465 17:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.032 17:59:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:11.964 17:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.964 17:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:11.964 17:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.964 17:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:11.964 17:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:13.861 17:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:13.861 17:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:13.861 17:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.861 17:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:13.861 17:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.861 17:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:13.861 17:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:13.861 [global] 00:11:13.861 thread=1 00:11:13.861 invalidate=1 00:11:13.861 rw=write 00:11:13.861 time_based=1 00:11:13.861 runtime=1 00:11:13.861 ioengine=libaio 00:11:13.861 direct=1 00:11:13.861 bs=4096 00:11:13.861 iodepth=1 00:11:13.861 
norandommap=0 00:11:13.861 numjobs=1 00:11:13.861 00:11:13.861 verify_dump=1 00:11:13.861 verify_backlog=512 00:11:13.861 verify_state_save=0 00:11:13.861 do_verify=1 00:11:13.861 verify=crc32c-intel 00:11:13.861 [job0] 00:11:13.861 filename=/dev/nvme0n1 00:11:13.861 Could not set queue depth (nvme0n1) 00:11:14.119 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.119 fio-3.35 00:11:14.119 Starting 1 thread 00:11:15.491 00:11:15.491 job0: (groupid=0, jobs=1): err= 0: pid=2270898: Tue Jul 23 17:59:22 2024 00:11:15.491 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:15.491 slat (nsec): min=8158, max=62687, avg=15220.73, stdev=5486.24 00:11:15.491 clat (usec): min=196, max=41021, avg=1508.11, stdev=7093.19 00:11:15.491 lat (usec): min=205, max=41050, avg=1523.33, stdev=7094.50 00:11:15.491 clat percentiles (usec): 00:11:15.491 | 1.00th=[ 202], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 225], 00:11:15.491 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 237], 00:11:15.491 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 273], 00:11:15.491 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:15.491 | 99.99th=[41157] 00:11:15.491 write: IOPS=819, BW=3277KiB/s (3355kB/s)(3280KiB/1001msec); 0 zone resets 00:11:15.491 slat (usec): min=10, max=32407, avg=60.72, stdev=1130.99 00:11:15.491 clat (usec): min=140, max=488, avg=198.72, stdev=45.94 00:11:15.491 lat (usec): min=157, max=32831, avg=259.44, stdev=1139.82 00:11:15.491 clat percentiles (usec): 00:11:15.491 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:11:15.491 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 192], 00:11:15.491 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 310], 00:11:15.491 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 490], 99.95th=[ 490], 00:11:15.491 | 99.99th=[ 490] 00:11:15.491 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, 
avg=4096.00, stdev= 0.00, samples=1 00:11:15.491 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:15.491 lat (usec) : 250=90.54%, 500=8.26% 00:11:15.491 lat (msec) : 50=1.20% 00:11:15.491 cpu : usr=2.60%, sys=2.70%, ctx=1334, majf=0, minf=2 00:11:15.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.491 issued rwts: total=512,820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.491 00:11:15.491 Run status group 0 (all jobs): 00:11:15.491 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:11:15.491 WRITE: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=3280KiB (3359kB), run=1001-1001msec 00:11:15.491 00:11:15.491 Disk stats (read/write): 00:11:15.491 nvme0n1: ios=415/512, merge=0/0, ticks=1704/93, in_queue=1797, util=98.80% 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:15.491 rmmod nvme_tcp 00:11:15.491 rmmod nvme_fabrics 00:11:15.491 rmmod nvme_keyring 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2270261 ']' 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2270261 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2270261 ']' 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2270261 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2270261 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2270261' 00:11:15.491 killing process with pid 2270261 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2270261 00:11:15.491 17:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2270261 00:11:15.750 17:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.750 17:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:15.750 17:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:15.750 17:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.750 17:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.750 17:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.750 17:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.750 17:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.682 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:17.682 00:11:17.682 real 0m10.043s 00:11:17.682 user 0m22.510s 00:11:17.682 sys 0m2.420s 00:11:17.682 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- 
# xtrace_disable 00:11:17.682 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.682 ************************************ 00:11:17.682 END TEST nvmf_nmic 00:11:17.682 ************************************ 00:11:17.682 17:59:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:17.682 17:59:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:17.682 17:59:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:17.682 17:59:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.682 17:59:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:17.941 ************************************ 00:11:17.941 START TEST nvmf_fio_target 00:11:17.941 ************************************ 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:17.941 * Looking for test storage... 
00:11:17.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.941 17:59:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.941 17:59:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.941 17:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:20.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:20.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:20.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:20.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:20.474 17:59:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:20.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:11:20.474 00:11:20.474 --- 10.0.0.2 ping statistics --- 00:11:20.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.474 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:20.474 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:20.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:11:20.475 00:11:20.475 --- 10.0.0.1 ping statistics --- 00:11:20.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.475 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2272983 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2272983 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2272983 ']' 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.475 17:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.475 [2024-07-23 17:59:27.737309] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:11:20.475 [2024-07-23 17:59:27.737414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.475 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.475 [2024-07-23 17:59:27.801861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.475 [2024-07-23 17:59:27.882553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.475 [2024-07-23 17:59:27.882608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:20.475 [2024-07-23 17:59:27.882631] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.475 [2024-07-23 17:59:27.882642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.475 [2024-07-23 17:59:27.882651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.475 [2024-07-23 17:59:27.882744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.475 [2024-07-23 17:59:27.882807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.475 [2024-07-23 17:59:27.882873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.475 [2024-07-23 17:59:27.882876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.475 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.475 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:20.475 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:20.475 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:20.475 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.475 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.475 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:20.733 [2024-07-23 17:59:28.260643] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.733 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.991 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:20.991 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.249 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:21.249 17:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.507 17:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:21.507 17:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.765 17:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:21.765 17:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:22.022 17:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.279 17:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:22.280 17:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.537 17:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:22.537 17:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.795 17:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:22.795 17:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:23.053 17:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:23.310 17:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:23.310 17:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:23.568 17:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:23.568 17:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:23.825 17:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.083 [2024-07-23 17:59:31.616108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.083 17:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:24.341 17:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:24.598 17:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.165 17:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:25.165 17:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:25.165 17:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.165 17:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:25.165 17:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:25.165 17:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:27.687 17:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:27.687 17:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:27.687 17:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.687 17:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:27.687 17:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.687 17:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:27.687 17:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:27.687 [global] 00:11:27.687 thread=1 00:11:27.687 invalidate=1 00:11:27.687 rw=write 00:11:27.687 time_based=1 00:11:27.687 runtime=1 00:11:27.687 ioengine=libaio 00:11:27.687 direct=1 00:11:27.687 bs=4096 00:11:27.687 iodepth=1 00:11:27.687 norandommap=0 00:11:27.687 numjobs=1 00:11:27.687 00:11:27.687 verify_dump=1 00:11:27.687 verify_backlog=512 00:11:27.687 verify_state_save=0 00:11:27.687 do_verify=1 00:11:27.687 verify=crc32c-intel 00:11:27.687 [job0] 00:11:27.687 filename=/dev/nvme0n1 00:11:27.687 [job1] 00:11:27.687 filename=/dev/nvme0n2 00:11:27.687 [job2] 00:11:27.687 filename=/dev/nvme0n3 00:11:27.687 [job3] 00:11:27.687 filename=/dev/nvme0n4 00:11:27.687 Could not set queue depth (nvme0n1) 00:11:27.687 Could not set queue depth (nvme0n2) 00:11:27.687 Could not set queue depth (nvme0n3) 00:11:27.687 Could not set queue depth (nvme0n4) 00:11:27.687 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.687 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.687 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.687 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.687 fio-3.35 00:11:27.687 Starting 4 threads 00:11:29.057 00:11:29.057 job0: (groupid=0, jobs=1): err= 0: pid=2274060: Tue Jul 23 17:59:36 2024 00:11:29.057 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:11:29.057 slat (nsec): min=9807, max=36180, avg=25263.35, stdev=9184.82 00:11:29.057 clat (usec): min=40851, max=41062, avg=40964.12, stdev=40.52 00:11:29.057 lat (usec): min=40861, max=41080, avg=40989.38, stdev=40.87 00:11:29.057 clat percentiles (usec): 00:11:29.057 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:11:29.057 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:29.057 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:29.057 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:29.057 | 99.99th=[41157] 00:11:29.057 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:11:29.057 slat (nsec): min=7066, max=42486, avg=8678.17, stdev=3101.64 00:11:29.057 clat (usec): min=134, max=289, avg=163.92, stdev=16.74 00:11:29.057 lat (usec): min=143, max=299, avg=172.59, stdev=17.16 00:11:29.057 clat percentiles (usec): 00:11:29.057 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:11:29.057 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:11:29.057 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 194], 00:11:29.057 | 99.00th=[ 221], 99.50th=[ 243], 99.90th=[ 289], 99.95th=[ 289], 00:11:29.057 | 99.99th=[ 289] 00:11:29.057 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:11:29.057 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:29.057 lat (usec) : 250=95.33%, 500=0.37% 00:11:29.057 lat (msec) : 50=4.30% 00:11:29.057 cpu : usr=0.19%, sys=0.68%, ctx=535, majf=0, minf=1 00:11:29.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.057 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.057 job1: (groupid=0, jobs=1): err= 0: pid=2274061: Tue Jul 23 17:59:36 2024 00:11:29.057 read: IOPS=1677, BW=6709KiB/s (6870kB/s)(6716KiB/1001msec) 00:11:29.057 slat (nsec): min=6106, max=63569, avg=15170.65, stdev=5783.80 00:11:29.057 clat (usec): min=189, max=651, avg=272.62, 
stdev=76.36 00:11:29.057 lat (usec): min=207, max=666, avg=287.79, stdev=78.28 00:11:29.057 clat percentiles (usec): 00:11:29.057 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:11:29.057 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:11:29.057 | 70.00th=[ 253], 80.00th=[ 293], 90.00th=[ 400], 95.00th=[ 461], 00:11:29.057 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 652], 99.95th=[ 652], 00:11:29.057 | 99.99th=[ 652] 00:11:29.057 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:29.057 slat (usec): min=7, max=23693, avg=31.06, stdev=523.18 00:11:29.057 clat (usec): min=133, max=1039, avg=210.29, stdev=57.48 00:11:29.057 lat (usec): min=143, max=24101, avg=241.35, stdev=530.58 00:11:29.057 clat percentiles (usec): 00:11:29.057 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 174], 00:11:29.057 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 206], 00:11:29.057 | 70.00th=[ 223], 80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 306], 00:11:29.057 | 99.00th=[ 404], 99.50th=[ 437], 99.90th=[ 807], 99.95th=[ 824], 00:11:29.057 | 99.99th=[ 1037] 00:11:29.057 bw ( KiB/s): min= 8192, max= 8192, per=59.03%, avg=8192.00, stdev= 0.00, samples=1 00:11:29.057 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:29.057 lat (usec) : 250=74.32%, 500=24.36%, 750=1.23%, 1000=0.05% 00:11:29.057 lat (msec) : 2=0.03% 00:11:29.057 cpu : usr=4.60%, sys=9.00%, ctx=3729, majf=0, minf=2 00:11:29.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.057 issued rwts: total=1679,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.057 job2: (groupid=0, jobs=1): err= 0: pid=2274062: Tue Jul 23 17:59:36 2024 00:11:29.057 read: 
IOPS=458, BW=1834KiB/s (1878kB/s)(1836KiB/1001msec) 00:11:29.057 slat (nsec): min=6078, max=68517, avg=23629.62, stdev=11152.99 00:11:29.057 clat (usec): min=227, max=41251, avg=1846.08, stdev=7446.00 00:11:29.057 lat (usec): min=240, max=41265, avg=1869.71, stdev=7446.19 00:11:29.057 clat percentiles (usec): 00:11:29.057 | 1.00th=[ 265], 5.00th=[ 318], 10.00th=[ 343], 20.00th=[ 371], 00:11:29.057 | 30.00th=[ 392], 40.00th=[ 420], 50.00th=[ 441], 60.00th=[ 461], 00:11:29.057 | 70.00th=[ 474], 80.00th=[ 486], 90.00th=[ 529], 95.00th=[ 570], 00:11:29.057 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:29.057 | 99.99th=[41157] 00:11:29.057 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:29.057 slat (nsec): min=6269, max=40248, avg=12469.65, stdev=5567.77 00:11:29.057 clat (usec): min=165, max=471, avg=246.75, stdev=59.85 00:11:29.057 lat (usec): min=172, max=512, avg=259.22, stdev=60.43 00:11:29.057 clat percentiles (usec): 00:11:29.057 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 194], 00:11:29.057 | 30.00th=[ 212], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:11:29.057 | 70.00th=[ 258], 80.00th=[ 277], 90.00th=[ 371], 95.00th=[ 379], 00:11:29.057 | 99.00th=[ 400], 99.50th=[ 449], 99.90th=[ 474], 99.95th=[ 474], 00:11:29.057 | 99.99th=[ 474] 00:11:29.058 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:11:29.058 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:29.058 lat (usec) : 250=34.29%, 500=58.19%, 750=5.46%, 1000=0.31% 00:11:29.058 lat (msec) : 2=0.10%, 50=1.65% 00:11:29.058 cpu : usr=0.60%, sys=2.10%, ctx=973, majf=0, minf=1 00:11:29.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.058 issued rwts: total=459,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:29.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.058 job3: (groupid=0, jobs=1): err= 0: pid=2274063: Tue Jul 23 17:59:36 2024 00:11:29.058 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:11:29.058 slat (nsec): min=7779, max=38872, avg=27049.91, stdev=10628.14 00:11:29.058 clat (usec): min=40898, max=41086, avg=40969.03, stdev=41.75 00:11:29.058 lat (usec): min=40934, max=41094, avg=40996.08, stdev=33.55 00:11:29.058 clat percentiles (usec): 00:11:29.058 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:29.058 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:29.058 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:29.058 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:29.058 | 99.99th=[41157] 00:11:29.058 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:11:29.058 slat (nsec): min=7672, max=34470, avg=10704.75, stdev=4694.58 00:11:29.058 clat (usec): min=156, max=525, avg=186.35, stdev=29.89 00:11:29.058 lat (usec): min=165, max=536, avg=197.05, stdev=31.07 00:11:29.058 clat percentiles (usec): 00:11:29.058 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:11:29.058 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:29.058 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 217], 00:11:29.058 | 99.00th=[ 253], 99.50th=[ 482], 99.90th=[ 529], 99.95th=[ 529], 00:11:29.058 | 99.99th=[ 529] 00:11:29.058 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:11:29.058 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:29.058 lat (usec) : 250=94.76%, 500=0.94%, 750=0.19% 00:11:29.058 lat (msec) : 50=4.12% 00:11:29.058 cpu : usr=0.10%, sys=0.99%, ctx=535, majf=0, minf=1 00:11:29.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.058 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.058 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.058 00:11:29.058 Run status group 0 (all jobs): 00:11:29.058 READ: bw=8453KiB/s (8656kB/s), 87.2KiB/s-6709KiB/s (89.3kB/s-6870kB/s), io=8732KiB (8942kB), run=1001-1033msec 00:11:29.058 WRITE: bw=13.6MiB/s (14.2MB/s), 1983KiB/s-8184KiB/s (2030kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1033msec 00:11:29.058 00:11:29.058 Disk stats (read/write): 00:11:29.058 nvme0n1: ios=68/512, merge=0/0, ticks=763/81, in_queue=844, util=86.47% 00:11:29.058 nvme0n2: ios=1580/1717, merge=0/0, ticks=560/327, in_queue=887, util=89.93% 00:11:29.058 nvme0n3: ios=159/512, merge=0/0, ticks=1612/122, in_queue=1734, util=93.10% 00:11:29.058 nvme0n4: ios=41/512, merge=0/0, ticks=1636/98, in_queue=1734, util=93.78% 00:11:29.058 17:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:29.058 [global] 00:11:29.058 thread=1 00:11:29.058 invalidate=1 00:11:29.058 rw=randwrite 00:11:29.058 time_based=1 00:11:29.058 runtime=1 00:11:29.058 ioengine=libaio 00:11:29.058 direct=1 00:11:29.058 bs=4096 00:11:29.058 iodepth=1 00:11:29.058 norandommap=0 00:11:29.058 numjobs=1 00:11:29.058 00:11:29.058 verify_dump=1 00:11:29.058 verify_backlog=512 00:11:29.058 verify_state_save=0 00:11:29.058 do_verify=1 00:11:29.058 verify=crc32c-intel 00:11:29.058 [job0] 00:11:29.058 filename=/dev/nvme0n1 00:11:29.058 [job1] 00:11:29.058 filename=/dev/nvme0n2 00:11:29.058 [job2] 00:11:29.058 filename=/dev/nvme0n3 00:11:29.058 [job3] 00:11:29.058 filename=/dev/nvme0n4 00:11:29.058 Could not set queue depth (nvme0n1) 00:11:29.058 Could not set queue depth (nvme0n2) 00:11:29.058 Could not set 
queue depth (nvme0n3) 00:11:29.058 Could not set queue depth (nvme0n4) 00:11:29.058 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.058 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.058 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.058 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.058 fio-3.35 00:11:29.058 Starting 4 threads 00:11:30.428 00:11:30.428 job0: (groupid=0, jobs=1): err= 0: pid=2274287: Tue Jul 23 17:59:37 2024 00:11:30.428 read: IOPS=1738, BW=6953KiB/s (7120kB/s)(6960KiB/1001msec) 00:11:30.428 slat (nsec): min=5689, max=66255, avg=15851.48, stdev=7470.46 00:11:30.428 clat (usec): min=196, max=3028, avg=295.29, stdev=118.00 00:11:30.428 lat (usec): min=202, max=3037, avg=311.14, stdev=120.18 00:11:30.428 clat percentiles (usec): 00:11:30.428 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 241], 00:11:30.428 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:11:30.428 | 70.00th=[ 281], 80.00th=[ 322], 90.00th=[ 420], 95.00th=[ 490], 00:11:30.428 | 99.00th=[ 553], 99.50th=[ 644], 99.90th=[ 1811], 99.95th=[ 3032], 00:11:30.428 | 99.99th=[ 3032] 00:11:30.428 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:30.428 slat (nsec): min=7179, max=59836, avg=18385.80, stdev=6936.15 00:11:30.428 clat (usec): min=143, max=359, avg=196.19, stdev=28.77 00:11:30.428 lat (usec): min=151, max=369, avg=214.57, stdev=30.08 00:11:30.428 clat percentiles (usec): 00:11:30.428 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 174], 00:11:30.428 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:11:30.428 | 70.00th=[ 204], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 255], 00:11:30.428 | 99.00th=[ 277], 99.50th=[ 281], 
99.90th=[ 318], 99.95th=[ 326], 00:11:30.428 | 99.99th=[ 359] 00:11:30.428 bw ( KiB/s): min= 8192, max= 8192, per=40.28%, avg=8192.00, stdev= 0.00, samples=1 00:11:30.428 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:30.428 lat (usec) : 250=63.86%, 500=34.00%, 750=1.93%, 1000=0.11% 00:11:30.428 lat (msec) : 2=0.08%, 4=0.03% 00:11:30.428 cpu : usr=5.80%, sys=7.90%, ctx=3791, majf=0, minf=2 00:11:30.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.428 issued rwts: total=1740,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.428 job1: (groupid=0, jobs=1): err= 0: pid=2274288: Tue Jul 23 17:59:37 2024 00:11:30.428 read: IOPS=237, BW=951KiB/s (974kB/s)(952KiB/1001msec) 00:11:30.428 slat (nsec): min=6410, max=38558, avg=13181.60, stdev=7688.61 00:11:30.428 clat (usec): min=219, max=41339, avg=3696.01, stdev=11040.17 00:11:30.428 lat (usec): min=226, max=41349, avg=3709.19, stdev=11044.13 00:11:30.428 clat percentiles (usec): 00:11:30.428 | 1.00th=[ 225], 5.00th=[ 255], 10.00th=[ 273], 20.00th=[ 293], 00:11:30.428 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 371], 00:11:30.428 | 70.00th=[ 388], 80.00th=[ 412], 90.00th=[ 627], 95.00th=[41157], 00:11:30.428 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:30.428 | 99.99th=[41157] 00:11:30.428 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:30.428 slat (usec): min=7, max=7531, avg=31.28, stdev=332.24 00:11:30.428 clat (usec): min=148, max=265, avg=191.63, stdev=18.85 00:11:30.428 lat (usec): min=158, max=7743, avg=222.91, stdev=333.83 00:11:30.428 clat percentiles (usec): 00:11:30.428 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 
00:11:30.428 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:11:30.428 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 221], 00:11:30.428 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 265], 99.95th=[ 265], 00:11:30.428 | 99.99th=[ 265] 00:11:30.428 bw ( KiB/s): min= 4096, max= 4096, per=20.14%, avg=4096.00, stdev= 0.00, samples=1 00:11:30.428 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:30.428 lat (usec) : 250=69.47%, 500=26.80%, 750=0.67%, 1000=0.27% 00:11:30.428 lat (msec) : 10=0.13%, 50=2.67% 00:11:30.428 cpu : usr=0.50%, sys=1.70%, ctx=752, majf=0, minf=1 00:11:30.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.429 issued rwts: total=238,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.429 job2: (groupid=0, jobs=1): err= 0: pid=2274295: Tue Jul 23 17:59:37 2024 00:11:30.429 read: IOPS=267, BW=1069KiB/s (1094kB/s)(1076KiB/1007msec) 00:11:30.429 slat (nsec): min=5700, max=51820, avg=16235.86, stdev=8723.59 00:11:30.429 clat (usec): min=229, max=41056, avg=3196.86, stdev=10335.15 00:11:30.429 lat (usec): min=249, max=41089, avg=3213.09, stdev=10337.78 00:11:30.429 clat percentiles (usec): 00:11:30.429 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 265], 00:11:30.429 | 30.00th=[ 273], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 326], 00:11:30.429 | 70.00th=[ 351], 80.00th=[ 396], 90.00th=[ 545], 95.00th=[41157], 00:11:30.429 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:30.429 | 99.99th=[41157] 00:11:30.429 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:11:30.429 slat (nsec): min=6670, max=52820, avg=16316.60, stdev=7986.88 00:11:30.429 clat (usec): min=173, max=843, 
avg=254.75, stdev=62.72 00:11:30.429 lat (usec): min=180, max=851, avg=271.07, stdev=61.86 00:11:30.429 clat percentiles (usec): 00:11:30.429 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 212], 00:11:30.429 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 253], 00:11:30.429 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 314], 95.00th=[ 375], 00:11:30.429 | 99.00th=[ 420], 99.50th=[ 586], 99.90th=[ 840], 99.95th=[ 840], 00:11:30.429 | 99.99th=[ 840] 00:11:30.429 bw ( KiB/s): min= 4096, max= 4096, per=20.14%, avg=4096.00, stdev= 0.00, samples=1 00:11:30.429 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:30.429 lat (usec) : 250=40.72%, 500=55.19%, 750=0.77%, 1000=0.26% 00:11:30.429 lat (msec) : 2=0.38%, 4=0.26%, 50=2.43% 00:11:30.429 cpu : usr=0.99%, sys=0.89%, ctx=783, majf=0, minf=1 00:11:30.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.429 issued rwts: total=269,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.429 job3: (groupid=0, jobs=1): err= 0: pid=2274296: Tue Jul 23 17:59:37 2024 00:11:30.429 read: IOPS=1736, BW=6945KiB/s (7112kB/s)(6952KiB/1001msec) 00:11:30.429 slat (nsec): min=5692, max=54456, avg=14510.84, stdev=5412.85 00:11:30.429 clat (usec): min=222, max=892, avg=280.61, stdev=44.27 00:11:30.429 lat (usec): min=229, max=900, avg=295.12, stdev=46.46 00:11:30.429 clat percentiles (usec): 00:11:30.429 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 255], 00:11:30.429 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:11:30.429 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 338], 00:11:30.429 | 99.00th=[ 457], 99.50th=[ 519], 99.90th=[ 807], 99.95th=[ 889], 00:11:30.429 | 99.99th=[ 889] 
00:11:30.429 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:30.429 slat (nsec): min=7334, max=61364, avg=18866.07, stdev=7236.10 00:11:30.429 clat (usec): min=153, max=448, avg=210.24, stdev=40.32 00:11:30.429 lat (usec): min=164, max=474, avg=229.11, stdev=41.52 00:11:30.429 clat percentiles (usec): 00:11:30.429 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 184], 00:11:30.429 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:11:30.429 | 70.00th=[ 215], 80.00th=[ 233], 90.00th=[ 262], 95.00th=[ 289], 00:11:30.429 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 416], 99.95th=[ 429], 00:11:30.429 | 99.99th=[ 449] 00:11:30.429 bw ( KiB/s): min= 8192, max= 8192, per=40.28%, avg=8192.00, stdev= 0.00, samples=1 00:11:30.429 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:30.429 lat (usec) : 250=53.72%, 500=45.99%, 750=0.24%, 1000=0.05% 00:11:30.429 cpu : usr=5.40%, sys=7.90%, ctx=3787, majf=0, minf=1 00:11:30.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.429 issued rwts: total=1738,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.429 00:11:30.429 Run status group 0 (all jobs): 00:11:30.429 READ: bw=15.5MiB/s (16.2MB/s), 951KiB/s-6953KiB/s (974kB/s-7120kB/s), io=15.6MiB (16.3MB), run=1001-1007msec 00:11:30.429 WRITE: bw=19.9MiB/s (20.8MB/s), 2034KiB/s-8184KiB/s (2083kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1007msec 00:11:30.429 00:11:30.429 Disk stats (read/write): 00:11:30.429 nvme0n1: ios=1586/1699, merge=0/0, ticks=1283/321, in_queue=1604, util=93.79% 00:11:30.429 nvme0n2: ios=147/512, merge=0/0, ticks=954/93, in_queue=1047, util=96.75% 00:11:30.429 nvme0n3: ios=172/512, merge=0/0, ticks=1307/128, 
in_queue=1435, util=96.76% 00:11:30.429 nvme0n4: ios=1593/1675, merge=0/0, ticks=938/331, in_queue=1269, util=97.89% 00:11:30.429 17:59:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:30.429 [global] 00:11:30.429 thread=1 00:11:30.429 invalidate=1 00:11:30.429 rw=write 00:11:30.429 time_based=1 00:11:30.429 runtime=1 00:11:30.429 ioengine=libaio 00:11:30.429 direct=1 00:11:30.429 bs=4096 00:11:30.429 iodepth=128 00:11:30.429 norandommap=0 00:11:30.429 numjobs=1 00:11:30.429 00:11:30.429 verify_dump=1 00:11:30.429 verify_backlog=512 00:11:30.429 verify_state_save=0 00:11:30.429 do_verify=1 00:11:30.429 verify=crc32c-intel 00:11:30.429 [job0] 00:11:30.429 filename=/dev/nvme0n1 00:11:30.429 [job1] 00:11:30.429 filename=/dev/nvme0n2 00:11:30.429 [job2] 00:11:30.429 filename=/dev/nvme0n3 00:11:30.429 [job3] 00:11:30.429 filename=/dev/nvme0n4 00:11:30.429 Could not set queue depth (nvme0n1) 00:11:30.429 Could not set queue depth (nvme0n2) 00:11:30.429 Could not set queue depth (nvme0n3) 00:11:30.429 Could not set queue depth (nvme0n4) 00:11:30.429 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.429 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.429 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.429 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.429 fio-3.35 00:11:30.429 Starting 4 threads 00:11:31.802 00:11:31.802 job0: (groupid=0, jobs=1): err= 0: pid=2274523: Tue Jul 23 17:59:39 2024 00:11:31.802 read: IOPS=4525, BW=17.7MiB/s (18.5MB/s)(18.5MiB/1046msec) 00:11:31.802 slat (usec): min=2, max=12771, avg=98.77, stdev=674.25 00:11:31.802 clat (usec): min=3873, max=71075, 
avg=13607.43, stdev=8585.21 00:11:31.802 lat (usec): min=3878, max=71084, avg=13706.20, stdev=8618.31 00:11:31.802 clat percentiles (usec): 00:11:31.802 | 1.00th=[ 6849], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9765], 00:11:31.802 | 30.00th=[10159], 40.00th=[11076], 50.00th=[12387], 60.00th=[12911], 00:11:31.802 | 70.00th=[13829], 80.00th=[14746], 90.00th=[17171], 95.00th=[19530], 00:11:31.802 | 99.00th=[64750], 99.50th=[66847], 99.90th=[70779], 99.95th=[70779], 00:11:31.802 | 99.99th=[70779] 00:11:31.802 write: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1046msec); 0 zone resets 00:11:31.802 slat (usec): min=4, max=10933, avg=93.73, stdev=508.63 00:11:31.802 clat (usec): min=3115, max=71088, avg=13303.13, stdev=6355.89 00:11:31.802 lat (usec): min=3124, max=71098, avg=13396.86, stdev=6408.21 00:11:31.802 clat percentiles (usec): 00:11:31.802 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 8291], 20.00th=[ 9765], 00:11:31.802 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[12125], 00:11:31.802 | 70.00th=[13042], 80.00th=[14746], 90.00th=[23200], 95.00th=[29230], 00:11:31.802 | 99.00th=[34341], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:11:31.802 | 99.99th=[70779] 00:11:31.802 bw ( KiB/s): min=19584, max=21360, per=31.90%, avg=20472.00, stdev=1255.82, samples=2 00:11:31.802 iops : min= 4896, max= 5340, avg=5118.00, stdev=313.96, samples=2 00:11:31.802 lat (msec) : 4=0.24%, 10=23.33%, 20=67.56%, 50=7.59%, 100=1.28% 00:11:31.802 cpu : usr=7.08%, sys=10.33%, ctx=552, majf=0, minf=1 00:11:31.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:31.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.802 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.802 job1: (groupid=0, jobs=1): err= 0: pid=2274524: Tue 
Jul 23 17:59:39 2024 00:11:31.802 read: IOPS=3316, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1006msec) 00:11:31.802 slat (usec): min=2, max=13791, avg=146.90, stdev=886.22 00:11:31.802 clat (usec): min=958, max=50310, avg=19177.76, stdev=10418.21 00:11:31.802 lat (usec): min=3971, max=50322, avg=19324.67, stdev=10476.00 00:11:31.802 clat percentiles (usec): 00:11:31.802 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10683], 00:11:31.802 | 30.00th=[11076], 40.00th=[12780], 50.00th=[16057], 60.00th=[19268], 00:11:31.802 | 70.00th=[24511], 80.00th=[26870], 90.00th=[34341], 95.00th=[41681], 00:11:31.802 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:11:31.802 | 99.99th=[50070] 00:11:31.802 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:11:31.802 slat (usec): min=3, max=23001, avg=132.13, stdev=983.94 00:11:31.802 clat (usec): min=2486, max=62893, avg=17349.92, stdev=9161.10 00:11:31.802 lat (usec): min=2504, max=62926, avg=17482.05, stdev=9234.80 00:11:31.802 clat percentiles (usec): 00:11:31.802 | 1.00th=[ 5014], 5.00th=[ 7570], 10.00th=[10028], 20.00th=[10552], 00:11:31.802 | 30.00th=[10814], 40.00th=[11600], 50.00th=[15533], 60.00th=[18220], 00:11:31.802 | 70.00th=[21103], 80.00th=[22152], 90.00th=[26346], 95.00th=[35914], 00:11:31.802 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[57934], 00:11:31.802 | 99.99th=[62653] 00:11:31.802 bw ( KiB/s): min=11848, max=16824, per=22.34%, avg=14336.00, stdev=3518.56, samples=2 00:11:31.802 iops : min= 2962, max= 4206, avg=3584.00, stdev=879.64, samples=2 00:11:31.802 lat (usec) : 1000=0.01% 00:11:31.802 lat (msec) : 4=0.13%, 10=10.75%, 20=53.99%, 50=34.13%, 100=0.98% 00:11:31.802 cpu : usr=3.88%, sys=4.18%, ctx=242, majf=0, minf=1 00:11:31.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:31.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.802 issued rwts: total=3336,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.802 job2: (groupid=0, jobs=1): err= 0: pid=2274525: Tue Jul 23 17:59:39 2024 00:11:31.802 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:31.802 slat (usec): min=2, max=10526, avg=142.15, stdev=851.68 00:11:31.802 clat (usec): min=3509, max=42847, avg=18572.76, stdev=7236.42 00:11:31.802 lat (usec): min=3515, max=42962, avg=18714.91, stdev=7308.16 00:11:31.802 clat percentiles (usec): 00:11:31.802 | 1.00th=[ 3884], 5.00th=[10159], 10.00th=[11731], 20.00th=[12387], 00:11:31.802 | 30.00th=[13173], 40.00th=[14353], 50.00th=[17171], 60.00th=[18482], 00:11:31.802 | 70.00th=[22676], 80.00th=[26346], 90.00th=[29754], 95.00th=[31589], 00:11:31.802 | 99.00th=[37487], 99.50th=[37487], 99.90th=[39584], 99.95th=[40633], 00:11:31.802 | 99.99th=[42730] 00:11:31.802 write: IOPS=3579, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1003msec); 0 zone resets 00:11:31.802 slat (usec): min=4, max=8260, avg=126.38, stdev=668.13 00:11:31.802 clat (usec): min=359, max=36444, avg=16777.56, stdev=5817.34 00:11:31.802 lat (usec): min=3429, max=36461, avg=16903.94, stdev=5869.44 00:11:31.802 clat percentiles (usec): 00:11:31.802 | 1.00th=[ 6915], 5.00th=[10421], 10.00th=[11469], 20.00th=[12649], 00:11:31.802 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14746], 60.00th=[16450], 00:11:31.802 | 70.00th=[17957], 80.00th=[19530], 90.00th=[27132], 95.00th=[29230], 00:11:31.802 | 99.00th=[33424], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:11:31.802 | 99.99th=[36439] 00:11:31.802 bw ( KiB/s): min=12288, max=16384, per=22.34%, avg=14336.00, stdev=2896.31, samples=2 00:11:31.802 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:31.802 lat (usec) : 500=0.01% 00:11:31.802 lat (msec) : 4=0.59%, 10=3.37%, 20=70.06%, 50=25.97% 00:11:31.802 cpu : usr=6.29%, sys=6.29%, ctx=317, 
majf=0, minf=1 00:11:31.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:31.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.802 issued rwts: total=3584,3590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.802 job3: (groupid=0, jobs=1): err= 0: pid=2274526: Tue Jul 23 17:59:39 2024 00:11:31.802 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:31.802 slat (usec): min=2, max=14383, avg=121.01, stdev=674.95 00:11:31.802 clat (usec): min=7345, max=59689, avg=14750.31, stdev=4510.88 00:11:31.802 lat (usec): min=7353, max=59696, avg=14871.32, stdev=4567.65 00:11:31.802 clat percentiles (usec): 00:11:31.802 | 1.00th=[ 7832], 5.00th=[11207], 10.00th=[11994], 20.00th=[12518], 00:11:31.802 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13698], 60.00th=[14353], 00:11:31.802 | 70.00th=[14746], 80.00th=[15533], 90.00th=[17433], 95.00th=[21890], 00:11:31.802 | 99.00th=[29230], 99.50th=[50070], 99.90th=[55837], 99.95th=[59507], 00:11:31.802 | 99.99th=[59507] 00:11:31.802 write: IOPS=4471, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1004msec); 0 zone resets 00:11:31.802 slat (usec): min=3, max=12577, avg=105.02, stdev=554.33 00:11:31.802 clat (usec): min=377, max=66668, avg=14801.62, stdev=7091.44 00:11:31.802 lat (usec): min=1945, max=66675, avg=14906.65, stdev=7098.65 00:11:31.802 clat percentiles (usec): 00:11:31.802 | 1.00th=[ 4555], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:11:31.802 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13698], 60.00th=[14091], 00:11:31.802 | 70.00th=[14484], 80.00th=[15008], 90.00th=[16319], 95.00th=[20841], 00:11:31.802 | 99.00th=[57410], 99.50th=[62129], 99.90th=[62653], 99.95th=[66847], 00:11:31.802 | 99.99th=[66847] 00:11:31.802 bw ( KiB/s): min=15288, max=19560, per=27.15%, avg=17424.00, stdev=3020.76, samples=2 
00:11:31.802 iops : min= 3822, max= 4890, avg=4356.00, stdev=755.19, samples=2 00:11:31.802 lat (usec) : 500=0.01% 00:11:31.802 lat (msec) : 2=0.20%, 10=2.04%, 20=91.23%, 50=5.11%, 100=1.41% 00:11:31.802 cpu : usr=3.79%, sys=7.08%, ctx=452, majf=0, minf=1 00:11:31.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:31.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.803 issued rwts: total=4096,4489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.803 00:11:31.803 Run status group 0 (all jobs): 00:11:31.803 READ: bw=58.8MiB/s (61.7MB/s), 13.0MiB/s-17.7MiB/s (13.6MB/s-18.5MB/s), io=61.5MiB (64.5MB), run=1003-1046msec 00:11:31.803 WRITE: bw=62.7MiB/s (65.7MB/s), 13.9MiB/s-19.1MiB/s (14.6MB/s-20.0MB/s), io=65.6MiB (68.7MB), run=1003-1046msec 00:11:31.803 00:11:31.803 Disk stats (read/write): 00:11:31.803 nvme0n1: ios=4271/4608, merge=0/0, ticks=48556/53093, in_queue=101649, util=85.57% 00:11:31.803 nvme0n2: ios=2610/3047, merge=0/0, ticks=23624/25463, in_queue=49087, util=89.44% 00:11:31.803 nvme0n3: ios=2617/2968, merge=0/0, ticks=19618/18605, in_queue=38223, util=94.79% 00:11:31.803 nvme0n4: ios=3641/4069, merge=0/0, ticks=14007/14294, in_queue=28301, util=93.80% 00:11:31.803 17:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:31.803 [global] 00:11:31.803 thread=1 00:11:31.803 invalidate=1 00:11:31.803 rw=randwrite 00:11:31.803 time_based=1 00:11:31.803 runtime=1 00:11:31.803 ioengine=libaio 00:11:31.803 direct=1 00:11:31.803 bs=4096 00:11:31.803 iodepth=128 00:11:31.803 norandommap=0 00:11:31.803 numjobs=1 00:11:31.803 00:11:31.803 verify_dump=1 00:11:31.803 verify_backlog=512 00:11:31.803 
verify_state_save=0 00:11:31.803 do_verify=1 00:11:31.803 verify=crc32c-intel 00:11:31.803 [job0] 00:11:31.803 filename=/dev/nvme0n1 00:11:31.803 [job1] 00:11:31.803 filename=/dev/nvme0n2 00:11:31.803 [job2] 00:11:31.803 filename=/dev/nvme0n3 00:11:31.803 [job3] 00:11:31.803 filename=/dev/nvme0n4 00:11:31.803 Could not set queue depth (nvme0n1) 00:11:31.803 Could not set queue depth (nvme0n2) 00:11:31.803 Could not set queue depth (nvme0n3) 00:11:31.803 Could not set queue depth (nvme0n4) 00:11:31.803 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.803 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.803 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.803 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.803 fio-3.35 00:11:31.803 Starting 4 threads 00:11:33.175 00:11:33.175 job0: (groupid=0, jobs=1): err= 0: pid=2274868: Tue Jul 23 17:59:40 2024 00:11:33.175 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:11:33.175 slat (usec): min=2, max=13734, avg=103.40, stdev=661.55 00:11:33.175 clat (usec): min=5209, max=53110, avg=12680.58, stdev=5452.52 00:11:33.175 lat (usec): min=5229, max=53117, avg=12783.98, stdev=5506.91 00:11:33.175 clat percentiles (usec): 00:11:33.175 | 1.00th=[ 5997], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9765], 00:11:33.175 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:11:33.175 | 70.00th=[12387], 80.00th=[13960], 90.00th=[17695], 95.00th=[22676], 00:11:33.175 | 99.00th=[37487], 99.50th=[46924], 99.90th=[53216], 99.95th=[53216], 00:11:33.175 | 99.99th=[53216] 00:11:33.175 write: IOPS=4733, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1012msec); 0 zone resets 00:11:33.175 slat (usec): min=3, max=8826, avg=97.80, stdev=485.02 00:11:33.175 
clat (usec): min=1446, max=53110, avg=14519.03, stdev=7598.75 00:11:33.175 lat (usec): min=1459, max=53117, avg=14616.83, stdev=7651.08 00:11:33.175 clat percentiles (usec): 00:11:33.175 | 1.00th=[ 4080], 5.00th=[ 5735], 10.00th=[ 7767], 20.00th=[ 9241], 00:11:33.175 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11076], 60.00th=[11863], 00:11:33.175 | 70.00th=[18482], 80.00th=[20579], 90.00th=[27657], 95.00th=[30540], 00:11:33.175 | 99.00th=[34341], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:11:33.175 | 99.99th=[53216] 00:11:33.175 bw ( KiB/s): min=12736, max=24560, per=27.98%, avg=18648.00, stdev=8360.83, samples=2 00:11:33.175 iops : min= 3184, max= 6140, avg=4662.00, stdev=2090.21, samples=2 00:11:33.175 lat (msec) : 2=0.06%, 4=0.36%, 10=27.25%, 20=55.95%, 50=16.22% 00:11:33.175 lat (msec) : 100=0.16% 00:11:33.175 cpu : usr=7.02%, sys=10.58%, ctx=479, majf=0, minf=1 00:11:33.175 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:33.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.175 issued rwts: total=4608,4790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.175 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.175 job1: (groupid=0, jobs=1): err= 0: pid=2274871: Tue Jul 23 17:59:40 2024 00:11:33.175 read: IOPS=3141, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1005msec) 00:11:33.175 slat (usec): min=3, max=10060, avg=145.74, stdev=749.73 00:11:33.175 clat (usec): min=3063, max=49517, avg=18807.43, stdev=7662.82 00:11:33.175 lat (usec): min=6177, max=49554, avg=18953.17, stdev=7705.00 00:11:33.175 clat percentiles (usec): 00:11:33.175 | 1.00th=[ 9372], 5.00th=[13042], 10.00th=[13435], 20.00th=[14091], 00:11:33.176 | 30.00th=[14877], 40.00th=[15270], 50.00th=[16057], 60.00th=[16909], 00:11:33.176 | 70.00th=[18744], 80.00th=[22676], 90.00th=[25297], 95.00th=[39060], 00:11:33.176 | 99.00th=[46924], 
99.50th=[46924], 99.90th=[47973], 99.95th=[49021], 00:11:33.176 | 99.99th=[49546] 00:11:33.176 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:11:33.176 slat (usec): min=3, max=14244, avg=139.89, stdev=820.69 00:11:33.176 clat (usec): min=10022, max=56788, avg=18812.43, stdev=9079.78 00:11:33.176 lat (usec): min=10031, max=56801, avg=18952.32, stdev=9135.25 00:11:33.176 clat percentiles (usec): 00:11:33.176 | 1.00th=[10814], 5.00th=[11863], 10.00th=[11994], 20.00th=[13304], 00:11:33.176 | 30.00th=[13960], 40.00th=[14353], 50.00th=[15008], 60.00th=[16450], 00:11:33.176 | 70.00th=[19530], 80.00th=[21627], 90.00th=[31589], 95.00th=[41157], 00:11:33.176 | 99.00th=[56361], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:11:33.176 | 99.99th=[56886] 00:11:33.176 bw ( KiB/s): min=11944, max=16384, per=21.25%, avg=14164.00, stdev=3139.55, samples=2 00:11:33.176 iops : min= 2986, max= 4096, avg=3541.00, stdev=784.89, samples=2 00:11:33.176 lat (msec) : 4=0.01%, 10=0.62%, 20=70.97%, 50=27.33%, 100=1.07% 00:11:33.176 cpu : usr=6.08%, sys=6.27%, ctx=302, majf=0, minf=1 00:11:33.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:33.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.176 issued rwts: total=3157,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.176 job2: (groupid=0, jobs=1): err= 0: pid=2274872: Tue Jul 23 17:59:40 2024 00:11:33.176 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:33.176 slat (usec): min=3, max=5676, avg=133.95, stdev=608.41 00:11:33.176 clat (usec): min=10125, max=34388, avg=17313.64, stdev=4693.83 00:11:33.176 lat (usec): min=10861, max=34397, avg=17447.59, stdev=4705.87 00:11:33.176 clat percentiles (usec): 00:11:33.176 | 1.00th=[11207], 5.00th=[12125], 10.00th=[12911], 
20.00th=[14091], 00:11:33.176 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15533], 00:11:33.176 | 70.00th=[19268], 80.00th=[22676], 90.00th=[24773], 95.00th=[25035], 00:11:33.176 | 99.00th=[31327], 99.50th=[31589], 99.90th=[34341], 99.95th=[34341], 00:11:33.176 | 99.99th=[34341] 00:11:33.176 write: IOPS=3868, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1003msec); 0 zone resets 00:11:33.176 slat (usec): min=3, max=11039, avg=123.34, stdev=646.80 00:11:33.176 clat (usec): min=415, max=37416, avg=16794.96, stdev=7127.09 00:11:33.176 lat (usec): min=933, max=37437, avg=16918.31, stdev=7148.68 00:11:33.176 clat percentiles (usec): 00:11:33.176 | 1.00th=[ 6980], 5.00th=[10552], 10.00th=[11994], 20.00th=[12518], 00:11:33.176 | 30.00th=[12780], 40.00th=[13829], 50.00th=[14353], 60.00th=[15401], 00:11:33.176 | 70.00th=[15664], 80.00th=[18744], 90.00th=[31327], 95.00th=[32637], 00:11:33.176 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:11:33.176 | 99.99th=[37487] 00:11:33.176 bw ( KiB/s): min=14912, max=15104, per=22.52%, avg=15008.00, stdev=135.76, samples=2 00:11:33.176 iops : min= 3728, max= 3776, avg=3752.00, stdev=33.94, samples=2 00:11:33.176 lat (usec) : 500=0.01%, 1000=0.05% 00:11:33.176 lat (msec) : 4=0.43%, 10=1.66%, 20=74.00%, 50=23.85% 00:11:33.176 cpu : usr=5.49%, sys=8.28%, ctx=347, majf=0, minf=1 00:11:33.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:33.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.176 issued rwts: total=3584,3880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.176 job3: (groupid=0, jobs=1): err= 0: pid=2274873: Tue Jul 23 17:59:40 2024 00:11:33.176 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:11:33.176 slat (usec): min=2, max=8698, avg=107.66, stdev=583.50 
00:11:33.176 clat (usec): min=4768, max=23231, avg=13968.45, stdev=2441.26 00:11:33.176 lat (usec): min=4775, max=23270, avg=14076.11, stdev=2469.12 00:11:33.176 clat percentiles (usec): 00:11:33.176 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[11731], 20.00th=[12256], 00:11:33.176 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13960], 00:11:33.176 | 70.00th=[14877], 80.00th=[15795], 90.00th=[18482], 95.00th=[18744], 00:11:33.176 | 99.00th=[20055], 99.50th=[21365], 99.90th=[21890], 99.95th=[22152], 00:11:33.176 | 99.99th=[23200] 00:11:33.176 write: IOPS=4551, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1012msec); 0 zone resets 00:11:33.176 slat (usec): min=3, max=18122, avg=111.12, stdev=614.55 00:11:33.176 clat (usec): min=3328, max=31535, avg=15325.63, stdev=5528.64 00:11:33.176 lat (usec): min=3336, max=31569, avg=15436.76, stdev=5571.73 00:11:33.176 clat percentiles (usec): 00:11:33.176 | 1.00th=[ 5211], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[12125], 00:11:33.176 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13173], 60.00th=[14615], 00:11:33.176 | 70.00th=[15270], 80.00th=[18482], 90.00th=[23987], 95.00th=[28705], 00:11:33.176 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31589], 99.95th=[31589], 00:11:33.176 | 99.99th=[31589] 00:11:33.176 bw ( KiB/s): min=17856, max=17976, per=26.88%, avg=17916.00, stdev=84.85, samples=2 00:11:33.176 iops : min= 4464, max= 4494, avg=4479.00, stdev=21.21, samples=2 00:11:33.176 lat (msec) : 4=0.33%, 10=4.00%, 20=85.41%, 50=10.26% 00:11:33.176 cpu : usr=5.44%, sys=9.10%, ctx=489, majf=0, minf=1 00:11:33.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:33.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.176 issued rwts: total=4096,4606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.176 00:11:33.176 Run 
status group 0 (all jobs): 00:11:33.176 READ: bw=59.6MiB/s (62.5MB/s), 12.3MiB/s-17.8MiB/s (12.9MB/s-18.7MB/s), io=60.3MiB (63.3MB), run=1003-1012msec 00:11:33.176 WRITE: bw=65.1MiB/s (68.2MB/s), 13.9MiB/s-18.5MiB/s (14.6MB/s-19.4MB/s), io=65.9MiB (69.1MB), run=1003-1012msec 00:11:33.176 00:11:33.176 Disk stats (read/write): 00:11:33.176 nvme0n1: ios=4146/4311, merge=0/0, ticks=45751/54822, in_queue=100573, util=90.98% 00:11:33.176 nvme0n2: ios=2777/3072, merge=0/0, ticks=17258/15807, in_queue=33065, util=96.14% 00:11:33.176 nvme0n3: ios=3066/3072, merge=0/0, ticks=13660/15229, in_queue=28889, util=94.89% 00:11:33.176 nvme0n4: ios=3507/3584, merge=0/0, ticks=17625/17585, in_queue=35210, util=95.79% 00:11:33.176 17:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:33.176 17:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2275019 00:11:33.176 17:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:33.176 17:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:33.176 [global] 00:11:33.176 thread=1 00:11:33.176 invalidate=1 00:11:33.176 rw=read 00:11:33.176 time_based=1 00:11:33.176 runtime=10 00:11:33.176 ioengine=libaio 00:11:33.176 direct=1 00:11:33.176 bs=4096 00:11:33.176 iodepth=1 00:11:33.176 norandommap=1 00:11:33.176 numjobs=1 00:11:33.176 00:11:33.176 [job0] 00:11:33.176 filename=/dev/nvme0n1 00:11:33.176 [job1] 00:11:33.176 filename=/dev/nvme0n2 00:11:33.176 [job2] 00:11:33.176 filename=/dev/nvme0n3 00:11:33.176 [job3] 00:11:33.176 filename=/dev/nvme0n4 00:11:33.176 Could not set queue depth (nvme0n1) 00:11:33.176 Could not set queue depth (nvme0n2) 00:11:33.176 Could not set queue depth (nvme0n3) 00:11:33.176 Could not set queue depth (nvme0n4) 00:11:33.432 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.432 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.432 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.432 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.432 fio-3.35 00:11:33.432 Starting 4 threads 00:11:36.708 17:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:36.708 17:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:36.708 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3608576, buflen=4096 00:11:36.708 fio: pid=2275111, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:36.708 17:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:36.708 17:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:36.708 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=24207360, buflen=4096 00:11:36.708 fio: pid=2275110, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:36.966 17:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:36.966 17:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:36.966 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=49401856, buflen=4096 00:11:36.966 
fio: pid=2275108, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:37.224 17:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:37.224 17:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:37.224 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=9658368, buflen=4096 00:11:37.224 fio: pid=2275109, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:37.224 00:11:37.224 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2275108: Tue Jul 23 17:59:44 2024 00:11:37.224 read: IOPS=3466, BW=13.5MiB/s (14.2MB/s)(47.1MiB/3480msec) 00:11:37.224 slat (usec): min=4, max=14694, avg=14.81, stdev=165.89 00:11:37.224 clat (usec): min=173, max=41997, avg=268.53, stdev=926.75 00:11:37.224 lat (usec): min=178, max=42011, avg=283.34, stdev=942.18 00:11:37.224 clat percentiles (usec): 00:11:37.224 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:11:37.224 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 243], 00:11:37.224 | 70.00th=[ 260], 80.00th=[ 285], 90.00th=[ 343], 95.00th=[ 379], 00:11:37.224 | 99.00th=[ 457], 99.50th=[ 498], 99.90th=[ 562], 99.95th=[ 660], 00:11:37.224 | 99.99th=[42206] 00:11:37.224 bw ( KiB/s): min= 9864, max=18344, per=64.19%, avg=14680.00, stdev=3196.19, samples=6 00:11:37.224 iops : min= 2466, max= 4586, avg=3670.00, stdev=799.05, samples=6 00:11:37.224 lat (usec) : 250=64.56%, 500=35.02%, 750=0.36% 00:11:37.224 lat (msec) : 50=0.05% 00:11:37.224 cpu : usr=1.78%, sys=5.15%, ctx=12068, majf=0, minf=1 00:11:37.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.224 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.224 issued rwts: total=12062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.224 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2275109: Tue Jul 23 17:59:44 2024 00:11:37.224 read: IOPS=635, BW=2542KiB/s (2603kB/s)(9432KiB/3710msec) 00:11:37.224 slat (usec): min=4, max=25873, avg=45.52, stdev=684.03 00:11:37.224 clat (usec): min=172, max=42301, avg=1523.40, stdev=6832.63 00:11:37.224 lat (usec): min=185, max=55995, avg=1566.03, stdev=6917.78 00:11:37.224 clat percentiles (usec): 00:11:37.224 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 245], 00:11:37.224 | 30.00th=[ 265], 40.00th=[ 293], 50.00th=[ 334], 60.00th=[ 383], 00:11:37.224 | 70.00th=[ 424], 80.00th=[ 461], 90.00th=[ 506], 95.00th=[ 570], 00:11:37.224 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:37.224 | 99.99th=[42206] 00:11:37.224 bw ( KiB/s): min= 96, max= 7744, per=9.82%, avg=2245.14, stdev=2890.95, samples=7 00:11:37.224 iops : min= 24, max= 1936, avg=561.29, stdev=722.74, samples=7 00:11:37.224 lat (usec) : 250=22.21%, 500=66.26%, 750=8.39%, 1000=0.21% 00:11:37.224 lat (msec) : 50=2.88% 00:11:37.224 cpu : usr=0.43%, sys=1.51%, ctx=2364, majf=0, minf=1 00:11:37.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.224 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.224 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.224 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2275110: Tue Jul 23 17:59:44 2024 00:11:37.224 read: IOPS=1856, BW=7425KiB/s (7603kB/s)(23.1MiB/3184msec) 00:11:37.224 
slat (nsec): min=4720, max=71939, avg=15733.03, stdev=7875.64 00:11:37.224 clat (usec): min=194, max=42194, avg=515.60, stdev=3097.10 00:11:37.224 lat (usec): min=199, max=42203, avg=531.34, stdev=3097.42 00:11:37.224 clat percentiles (usec): 00:11:37.224 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 243], 00:11:37.224 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:11:37.224 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 375], 95.00th=[ 465], 00:11:37.224 | 99.00th=[ 578], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:37.224 | 99.99th=[42206] 00:11:37.224 bw ( KiB/s): min= 96, max=14392, per=33.63%, avg=7690.67, stdev=5862.90, samples=6 00:11:37.224 iops : min= 24, max= 3598, avg=1922.67, stdev=1465.73, samples=6 00:11:37.224 lat (usec) : 250=36.05%, 500=61.22%, 750=2.10%, 1000=0.02% 00:11:37.224 lat (msec) : 4=0.03%, 50=0.56% 00:11:37.224 cpu : usr=1.29%, sys=3.33%, ctx=5913, majf=0, minf=1 00:11:37.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.224 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.224 issued rwts: total=5911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.224 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2275111: Tue Jul 23 17:59:44 2024 00:11:37.224 read: IOPS=302, BW=1211KiB/s (1240kB/s)(3524KiB/2911msec) 00:11:37.224 slat (nsec): min=6478, max=54858, avg=27117.79, stdev=9998.99 00:11:37.224 clat (usec): min=243, max=41262, avg=3245.22, stdev=10386.64 00:11:37.224 lat (usec): min=264, max=41296, avg=3272.35, stdev=10386.04 00:11:37.224 clat percentiles (usec): 00:11:37.224 | 1.00th=[ 251], 5.00th=[ 273], 10.00th=[ 297], 20.00th=[ 338], 00:11:37.224 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 412], 00:11:37.224 | 
70.00th=[ 433], 80.00th=[ 465], 90.00th=[ 523], 95.00th=[41157],
00:11:37.224 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:37.224 | 99.99th=[41157]
00:11:37.224 bw ( KiB/s): min= 96, max= 6576, per=6.09%, avg=1393.60, stdev=2897.05, samples=5
00:11:37.224 iops : min= 24, max= 1644, avg=348.40, stdev=724.26, samples=5
00:11:37.224 lat (usec) : 250=0.79%, 500=87.41%, 750=4.54%, 1000=0.11%
00:11:37.224 lat (msec) : 50=7.03%
00:11:37.224 cpu : usr=0.27%, sys=1.00%, ctx=882, majf=0, minf=1
00:11:37.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:37.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:37.225 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:37.225 issued rwts: total=882,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:37.225 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:37.225
00:11:37.225 Run status group 0 (all jobs):
00:11:37.225 READ: bw=22.3MiB/s (23.4MB/s), 1211KiB/s-13.5MiB/s (1240kB/s-14.2MB/s), io=82.9MiB (86.9MB), run=2911-3710msec
00:11:37.225
00:11:37.225 Disk stats (read/write):
00:11:37.225 nvme0n1: ios=12103/0, merge=0/0, ticks=3678/0, in_queue=3678, util=99.23%
00:11:37.225 nvme0n2: ios=2072/0, merge=0/0, ticks=3463/0, in_queue=3463, util=94.93%
00:11:37.225 nvme0n3: ios=5881/0, merge=0/0, ticks=2885/0, in_queue=2885, util=96.75%
00:11:37.225 nvme0n4: ios=879/0, merge=0/0, ticks=2762/0, in_queue=2762, util=96.71%
00:11:37.483 17:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:37.483 17:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:11:37.740 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:37.740 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:11:37.998 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:37.998 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:11:38.256 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:38.256 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:11:38.547 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:11:38.547 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2275019
00:11:38.547 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:11:38.547 17:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:38.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:11:38.547 nvmf hotplug test: fio failed as expected
00:11:38.547 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:38.804 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:38.804 rmmod nvme_tcp
00:11:38.804 rmmod nvme_fabrics
00:11:38.805 rmmod nvme_keyring
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2272983 ']'
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2272983
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2272983 ']'
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2272983
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:38.805 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2272983
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2272983'
killing process with pid 2272983
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2272983
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2272983
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:39.064 17:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:41.598
00:11:41.598 real 0m23.386s
00:11:41.598 user 1m19.672s
00:11:41.598 sys 0m7.823s
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:41.598 ************************************
00:11:41.598 END TEST nvmf_fio_target
00:11:41.598 ************************************
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:41.598 ************************************
00:11:41.598 START TEST nvmf_bdevio
00:11:41.598 ************************************
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:11:41.598 * Looking for test storage...
00:11:41.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:41.598 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable
00:11:41.599 17:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:43.503 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:43.503 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=()
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=()
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=()
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=()
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=()
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=()
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=()
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:43.504 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:43.504 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:43.504 Found net devices under 0000:0a:00.0: cvl_0_0
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:43.504 Found net devices under 0000:0a:00.1: cvl_0_1
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:43.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:43.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms
00:11:43.504
00:11:43.504 --- 10.0.0.2 ping statistics ---
00:11:43.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.504 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:43.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:43.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms
00:11:43.504
00:11:43.504 --- 10.0.0.1 ping statistics ---
00:11:43.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.504 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:43.504 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2277734
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2277734
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2277734 ']'
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:43.762 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:43.762 [2024-07-23 17:59:51.226198] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:11:43.762 [2024-07-23 17:59:51.226282] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:43.762 EAL: No free 2048 kB hugepages reported on node 1
00:11:43.762 [2024-07-23 17:59:51.293138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:43.762 [2024-07-23 17:59:51.384652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:43.762 [2024-07-23 17:59:51.384714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:43.762 [2024-07-23 17:59:51.384728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:43.762 [2024-07-23 17:59:51.384739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:43.762 [2024-07-23 17:59:51.384749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:43.762 [2024-07-23 17:59:51.384836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:11:43.762 [2024-07-23 17:59:51.384881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:11:43.762 [2024-07-23 17:59:51.384942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:11:43.762 [2024-07-23 17:59:51.384944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:44.020 [2024-07-23 17:59:51.548792] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:44.020 Malloc0
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:44.020 [2024-07-23 17:59:51.601206] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=()
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:11:44.020 {
00:11:44.020 "params": {
00:11:44.020 "name": "Nvme$subsystem",
00:11:44.020 "trtype": "$TEST_TRANSPORT",
00:11:44.020 "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:44.020 "adrfam": "ipv4",
00:11:44.020 "trsvcid": "$NVMF_PORT",
00:11:44.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:44.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:44.020 "hdgst": ${hdgst:-false},
00:11:44.020 "ddgst": ${ddgst:-false}
00:11:44.020 },
00:11:44.020 "method": "bdev_nvme_attach_controller"
00:11:44.020 }
00:11:44.020 EOF
00:11:44.020 )")
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq .
00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:44.020 17:59:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:44.020 "params": { 00:11:44.020 "name": "Nvme1", 00:11:44.020 "trtype": "tcp", 00:11:44.020 "traddr": "10.0.0.2", 00:11:44.020 "adrfam": "ipv4", 00:11:44.020 "trsvcid": "4420", 00:11:44.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:44.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:44.020 "hdgst": false, 00:11:44.020 "ddgst": false 00:11:44.020 }, 00:11:44.020 "method": "bdev_nvme_attach_controller" 00:11:44.020 }' 00:11:44.020 [2024-07-23 17:59:51.649254] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:11:44.020 [2024-07-23 17:59:51.649346] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277758 ] 00:11:44.020 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.277 [2024-07-23 17:59:51.710884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:44.277 [2024-07-23 17:59:51.803108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.277 [2024-07-23 17:59:51.803158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.277 [2024-07-23 17:59:51.803160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.534 I/O targets: 00:11:44.534 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:44.534 00:11:44.534 00:11:44.534 CUnit - A unit testing framework for C - Version 2.1-3 00:11:44.534 http://cunit.sourceforge.net/ 00:11:44.534 00:11:44.534 00:11:44.534 Suite: bdevio tests on: Nvme1n1 00:11:44.791 Test: blockdev write read block ...passed 00:11:44.792 Test: blockdev write zeroes read block ...passed 00:11:44.792 Test: blockdev write zeroes read no split 
...passed 00:11:44.792 Test: blockdev write zeroes read split ...passed 00:11:44.792 Test: blockdev write zeroes read split partial ...passed 00:11:44.792 Test: blockdev reset ...[2024-07-23 17:59:52.260652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:44.792 [2024-07-23 17:59:52.260767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2291c90 (9): Bad file descriptor 00:11:44.792 [2024-07-23 17:59:52.313478] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:44.792 passed 00:11:44.792 Test: blockdev write read 8 blocks ...passed 00:11:44.792 Test: blockdev write read size > 128k ...passed 00:11:44.792 Test: blockdev write read invalid size ...passed 00:11:44.792 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:44.792 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:44.792 Test: blockdev write read max offset ...passed 00:11:44.792 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.792 Test: blockdev writev readv 8 blocks ...passed 00:11:44.792 Test: blockdev writev readv 30 x 1block ...passed 00:11:45.049 Test: blockdev writev readv block ...passed 00:11:45.049 Test: blockdev writev readv size > 128k ...passed 00:11:45.049 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:45.049 Test: blockdev comparev and writev ...[2024-07-23 17:59:52.487476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.049 [2024-07-23 17:59:52.487512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:45.049 [2024-07-23 17:59:52.487536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:45.049 [2024-07-23 17:59:52.487552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:45.049 [2024-07-23 17:59:52.487898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.049 [2024-07-23 17:59:52.487922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:45.049 [2024-07-23 17:59:52.487943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.049 [2024-07-23 17:59:52.487960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:45.049 [2024-07-23 17:59:52.488291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.049 [2024-07-23 17:59:52.488324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:45.049 [2024-07-23 17:59:52.488349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.049 [2024-07-23 17:59:52.488365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:45.049 [2024-07-23 17:59:52.488694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.049 [2024-07-23 17:59:52.488719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:45.049 [2024-07-23 17:59:52.488741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:45.049 [2024-07-23 17:59:52.488757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:45.049 passed 00:11:45.049 Test: blockdev nvme passthru rw ...passed 00:11:45.049 Test: blockdev nvme passthru vendor specific ...[2024-07-23 17:59:52.572601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:45.049 [2024-07-23 17:59:52.572628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:45.050 [2024-07-23 17:59:52.572776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:45.050 [2024-07-23 17:59:52.572799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:45.050 [2024-07-23 17:59:52.572940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:45.050 [2024-07-23 17:59:52.572963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:45.050 [2024-07-23 17:59:52.573111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:45.050 [2024-07-23 17:59:52.573134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:45.050 passed 00:11:45.050 Test: blockdev nvme admin passthru ...passed 00:11:45.050 Test: blockdev copy ...passed 00:11:45.050 00:11:45.050 Run Summary: Type Total Ran Passed Failed Inactive 00:11:45.050 suites 1 1 n/a 0 0 00:11:45.050 tests 23 23 23 0 0 00:11:45.050 asserts 152 152 152 0 n/a 00:11:45.050 00:11:45.050 Elapsed time = 
0.975 seconds 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.307 rmmod nvme_tcp 00:11:45.307 rmmod nvme_fabrics 00:11:45.307 rmmod nvme_keyring 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2277734 ']' 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2277734 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@948 -- # '[' -z 2277734 ']' 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2277734 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2277734 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2277734' 00:11:45.307 killing process with pid 2277734 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2277734 00:11:45.307 17:59:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2277734 00:11:45.565 17:59:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:45.565 17:59:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:45.565 17:59:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:45.565 17:59:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.565 17:59:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:45.565 17:59:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.565 17:59:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.565 
17:59:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.095 00:11:48.095 real 0m6.432s 00:11:48.095 user 0m10.209s 00:11:48.095 sys 0m2.195s 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.095 ************************************ 00:11:48.095 END TEST nvmf_bdevio 00:11:48.095 ************************************ 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:48.095 00:11:48.095 real 3m50.030s 00:11:48.095 user 9m47.537s 00:11:48.095 sys 1m9.865s 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.095 ************************************ 00:11:48.095 END TEST nvmf_target_core 00:11:48.095 ************************************ 00:11:48.095 17:59:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:48.095 17:59:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:48.095 17:59:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:48.095 17:59:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.095 17:59:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.095 ************************************ 00:11:48.095 START TEST nvmf_target_extra 00:11:48.095 ************************************ 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:48.095 * Looking for test storage... 00:11:48.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.095 17:59:55 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.095 ************************************ 00:11:48.095 START TEST nvmf_example 00:11:48.095 ************************************ 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:48.095 * Looking for test storage... 
00:11:48.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:48.095 17:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:48.095 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:48.096 17:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.096 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:49.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:49.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.998 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.999 17:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:49.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:49.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
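
The variables assigned above pin down the topology the harness builds next: the first port (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal dry-run sketch of that sequence follows; the interface names, addresses, and step order are taken from this log, while the dry-run wrapper is an addition so the sketch can be inspected without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based NVMe/TCP topology this harness builds.
# Commands are collected and printed; pass --apply (as root) to execute them.
TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

cmds=(
  "ip -4 addr flush $TARGET_IF"
  "ip -4 addr flush $INITIATOR_IF"
  "ip netns add $NS"
  "ip link set $TARGET_IF netns $NS"
  "ip addr add 10.0.0.1/24 dev $INITIATOR_IF"
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TARGET_IF"
  "ip link set $INITIATOR_IF up"
  "ip netns exec $NS ip link set $TARGET_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
)

for c in "${cmds[@]}"; do
  if [[ ${1:-} == --apply ]]; then $c; else echo "+ $c"; fi
done
```

The two cross-namespace pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) then verify this topology before any NVMe traffic is attempted.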
00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.999 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:50.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:11:50.257 00:11:50.257 --- 10.0.0.2 ping statistics --- 00:11:50.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.257 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:11:50.257 00:11:50.257 --- 10.0.0.1 ping statistics --- 00:11:50.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.257 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2279995 00:11:50.257 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:50.258 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:50.258 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2279995 00:11:50.258 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2279995 ']' 00:11:50.258 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.258 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.258 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
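
At this point the example target has been launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../build/examples/nvmf -i 0 -g 10000 -m 0xF`) and the harness blocks in `waitforlisten` until the process is up and its RPC socket at /var/tmp/spdk.sock exists. A simplified sketch of that gate is below; it only checks that the socket path appears (an assumption that stands in for the harness's fuller check, which also confirms the RPC endpoint answers), and the function name is illustrative:

```shell
# Simplified waitforlisten: poll until the target process is alive AND its
# RPC Unix socket path has appeared, or give up after N retries.
wait_for_rpc() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died early
    [[ -e $sock ]] && return 0               # socket path exists (simplified liveness check)
    sleep 0.1
  done
  return 1                                    # timed out waiting for the socket
}
```

Only once this returns success does the harness start issuing `rpc_cmd` calls against the socket, which is why the log shows the "Waiting for process to start up..." message before any configuration output.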
00:11:50.258 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.258 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.258 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.192 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.448 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:51.449 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:51.449 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.402 Initializing NVMe Controllers 00:12:01.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:01.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:01.402 Initialization complete. Launching workers. 00:12:01.402 ======================================================== 00:12:01.403 Latency(us) 00:12:01.403 Device Information : IOPS MiB/s Average min max 00:12:01.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15055.19 58.81 4250.71 763.35 16122.51 00:12:01.403 ======================================================== 00:12:01.403 Total : 15055.19 58.81 4250.71 763.35 16122.51 00:12:01.403 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.403 rmmod nvme_tcp 00:12:01.403 rmmod nvme_fabrics 00:12:01.403 rmmod nvme_keyring 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2279995 ']' 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2279995 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2279995 ']' 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2279995 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:01.403 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2279995 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2279995' 00:12:01.660 killing process with pid 2279995 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 2279995 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 2279995 00:12:01.660 nvmf threads initialize successfully 00:12:01.660 bdev subsystem init successfully 00:12:01.660 created a nvmf target service 00:12:01.660 create targets's poll groups done 00:12:01.660 all subsystems of target started 00:12:01.660 nvmf target is running 00:12:01.660 all subsystems of target stopped 00:12:01.660 destroy targets's poll groups done 00:12:01.660 destroyed the nvmf target 
service 00:12:01.660 bdev subsystem finish successfully 00:12:01.660 nvmf threads destroy successfully 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.660 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.191 00:12:04.191 real 0m15.950s 00:12:04.191 user 0m44.940s 00:12:04.191 sys 0m3.364s 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:04.191 ************************************ 00:12:04.191 END TEST nvmf_example 00:12:04.191 ************************************ 00:12:04.191 18:00:11 
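
The nvmf_example test above boils down to five RPC calls against the running target, in order: create the TCP transport, create a 64 MiB/512-byte-block malloc bdev, create subsystem cnode1, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420, after which spdk_nvme_perf connects and drives randrw I/O. Collected as a dry-run list (method names and arguments are taken from the log; `RPC="scripts/rpc.py"` is an assumption about invoking the RPC helper from an SPDK checkout rather than via the test's `rpc_cmd` wrapper):

```shell
#!/usr/bin/env bash
# The RPC sequence the nvmf_example test drove over /var/tmp/spdk.sock,
# printed as a dry-run checklist rather than executed.
RPC="scripts/rpc.py"   # assumed helper path; the harness uses its rpc_cmd wrapper

steps=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"
  "$RPC bdev_malloc_create 64 512"
  "$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)

printf '%s\n' "${steps[@]}"
```

Teardown then runs in reverse: perf exits, the trap kills the target pid, the nvme-tcp/nvme-fabrics modules are unloaded, and the namespace and addresses are flushed before the next test (nvmf_filesystem) starts.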
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.191 ************************************ 00:12:04.191 START TEST nvmf_filesystem 00:12:04.191 ************************************ 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:04.191 * Looking for test storage... 00:12:04.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:04.191 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:04.191 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:04.191 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:04.192 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:04.192 
18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:04.192 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:04.192 #define SPDK_CONFIG_H 00:12:04.192 #define SPDK_CONFIG_APPS 1 00:12:04.192 #define SPDK_CONFIG_ARCH native 00:12:04.192 #undef SPDK_CONFIG_ASAN 00:12:04.192 #undef SPDK_CONFIG_AVAHI 00:12:04.192 #undef SPDK_CONFIG_CET 00:12:04.192 #define SPDK_CONFIG_COVERAGE 1 00:12:04.192 #define SPDK_CONFIG_CROSS_PREFIX 00:12:04.192 #undef SPDK_CONFIG_CRYPTO 00:12:04.192 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:04.192 #undef SPDK_CONFIG_CUSTOMOCF 00:12:04.192 #undef SPDK_CONFIG_DAOS 00:12:04.192 #define SPDK_CONFIG_DAOS_DIR 00:12:04.192 #define SPDK_CONFIG_DEBUG 1 00:12:04.192 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:04.192 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:04.192 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:04.192 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:04.192 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 
00:12:04.192 #undef SPDK_CONFIG_DPDK_UADK 00:12:04.192 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:04.192 #define SPDK_CONFIG_EXAMPLES 1 00:12:04.192 #undef SPDK_CONFIG_FC 00:12:04.192 #define SPDK_CONFIG_FC_PATH 00:12:04.192 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:04.192 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:04.192 #undef SPDK_CONFIG_FUSE 00:12:04.192 #undef SPDK_CONFIG_FUZZER 00:12:04.192 #define SPDK_CONFIG_FUZZER_LIB 00:12:04.192 #undef SPDK_CONFIG_GOLANG 00:12:04.192 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:04.192 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:04.192 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:04.192 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:04.192 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:04.192 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:04.192 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:04.192 #define SPDK_CONFIG_IDXD 1 00:12:04.192 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:04.192 #undef SPDK_CONFIG_IPSEC_MB 00:12:04.192 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:04.192 #define SPDK_CONFIG_ISAL 1 00:12:04.192 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:04.192 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:04.192 #define SPDK_CONFIG_LIBDIR 00:12:04.192 #undef SPDK_CONFIG_LTO 00:12:04.192 #define SPDK_CONFIG_MAX_LCORES 128 00:12:04.192 #define SPDK_CONFIG_NVME_CUSE 1 00:12:04.192 #undef SPDK_CONFIG_OCF 00:12:04.192 #define SPDK_CONFIG_OCF_PATH 00:12:04.192 #define SPDK_CONFIG_OPENSSL_PATH 00:12:04.192 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:04.192 #define SPDK_CONFIG_PGO_DIR 00:12:04.192 #undef SPDK_CONFIG_PGO_USE 00:12:04.192 #define SPDK_CONFIG_PREFIX /usr/local 00:12:04.192 #undef SPDK_CONFIG_RAID5F 00:12:04.192 #undef SPDK_CONFIG_RBD 00:12:04.192 #define SPDK_CONFIG_RDMA 1 00:12:04.192 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:04.192 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:04.192 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:04.192 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:04.192 
#define SPDK_CONFIG_SHARED 1 00:12:04.193 #undef SPDK_CONFIG_SMA 00:12:04.193 #define SPDK_CONFIG_TESTS 1 00:12:04.193 #undef SPDK_CONFIG_TSAN 00:12:04.193 #define SPDK_CONFIG_UBLK 1 00:12:04.193 #define SPDK_CONFIG_UBSAN 1 00:12:04.193 #undef SPDK_CONFIG_UNIT_TESTS 00:12:04.193 #undef SPDK_CONFIG_URING 00:12:04.193 #define SPDK_CONFIG_URING_PATH 00:12:04.193 #undef SPDK_CONFIG_URING_ZNS 00:12:04.193 #undef SPDK_CONFIG_USDT 00:12:04.193 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:04.193 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:04.193 #define SPDK_CONFIG_VFIO_USER 1 00:12:04.193 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:04.193 #define SPDK_CONFIG_VHOST 1 00:12:04.193 #define SPDK_CONFIG_VIRTIO 1 00:12:04.193 #undef SPDK_CONFIG_VTUNE 00:12:04.193 #define SPDK_CONFIG_VTUNE_DIR 00:12:04.193 #define SPDK_CONFIG_WERROR 1 00:12:04.193 #define SPDK_CONFIG_WPDK_DIR 00:12:04.193 #undef SPDK_CONFIG_XNVME 00:12:04.193 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.193 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:04.193 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:04.193 
18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:04.193 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:04.194 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:04.194 
18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : true 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:04.194 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspac
e/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@263 -- # export valgrind= 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2282311 ]] 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2282311 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.MDVSum 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.MDVSum/tests/target /tmp/spdk.MDVSum 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:12:04.195 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:12:04.196 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=53354012672 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994721280 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8640708608 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30987440128 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:04.196 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12376535040 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22409216 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996918272 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=442368 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:12:04.196 * Looking for test storage... 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53354012672 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # 
new_size=10855301120 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:04.196 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.197 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.197 18:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.197 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:06.119 18:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:06.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:06.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.119 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:06.120 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:06.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:06.120 18:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:06.120 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:06.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:12:06.383 00:12:06.383 --- 10.0.0.2 ping statistics --- 00:12:06.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.383 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:12:06.383 00:12:06.383 --- 10.0.0.1 ping statistics --- 00:12:06.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.383 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:06.383 18:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:06.383 ************************************ 00:12:06.383 START TEST nvmf_filesystem_no_in_capsule 00:12:06.383 ************************************ 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.383 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2283940 00:12:06.384 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.384 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2283940 00:12:06.384 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@829 -- # '[' -z 2283940 ']' 00:12:06.384 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.384 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.384 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.384 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.384 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.384 [2024-07-23 18:00:13.899389] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:12:06.384 [2024-07-23 18:00:13.899484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.384 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.384 [2024-07-23 18:00:13.964002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.642 [2024-07-23 18:00:14.049776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.642 [2024-07-23 18:00:14.049823] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:06.642 [2024-07-23 18:00:14.049860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.642 [2024-07-23 18:00:14.049871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.642 [2024-07-23 18:00:14.049881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.642 [2024-07-23 18:00:14.049960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.642 [2024-07-23 18:00:14.050068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.642 [2024-07-23 18:00:14.050143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.642 [2024-07-23 18:00:14.050145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.642 [2024-07-23 18:00:14.202843] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.642 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.900 Malloc1 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.900 [2024-07-23 18:00:14.390468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:06.900 18:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.900 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:06.900 { 00:12:06.900 "name": "Malloc1", 00:12:06.900 "aliases": [ 00:12:06.900 "6e6f7a42-e62e-4813-b56f-d4d22c5c6169" 00:12:06.900 ], 00:12:06.900 "product_name": "Malloc disk", 00:12:06.900 "block_size": 512, 00:12:06.900 "num_blocks": 1048576, 00:12:06.900 "uuid": "6e6f7a42-e62e-4813-b56f-d4d22c5c6169", 00:12:06.900 "assigned_rate_limits": { 00:12:06.900 "rw_ios_per_sec": 0, 00:12:06.900 "rw_mbytes_per_sec": 0, 00:12:06.900 "r_mbytes_per_sec": 0, 00:12:06.900 "w_mbytes_per_sec": 0 00:12:06.900 }, 00:12:06.900 "claimed": true, 00:12:06.900 "claim_type": "exclusive_write", 00:12:06.900 "zoned": false, 00:12:06.900 "supported_io_types": { 00:12:06.900 "read": true, 00:12:06.900 "write": true, 00:12:06.900 "unmap": true, 00:12:06.900 "flush": true, 00:12:06.900 "reset": true, 00:12:06.900 "nvme_admin": false, 00:12:06.900 "nvme_io": false, 00:12:06.900 "nvme_io_md": false, 00:12:06.900 "write_zeroes": true, 00:12:06.900 "zcopy": true, 00:12:06.900 "get_zone_info": false, 00:12:06.900 "zone_management": false, 00:12:06.900 "zone_append": false, 00:12:06.901 "compare": false, 00:12:06.901 "compare_and_write": 
false, 00:12:06.901 "abort": true, 00:12:06.901 "seek_hole": false, 00:12:06.901 "seek_data": false, 00:12:06.901 "copy": true, 00:12:06.901 "nvme_iov_md": false 00:12:06.901 }, 00:12:06.901 "memory_domains": [ 00:12:06.901 { 00:12:06.901 "dma_device_id": "system", 00:12:06.901 "dma_device_type": 1 00:12:06.901 }, 00:12:06.901 { 00:12:06.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.901 "dma_device_type": 2 00:12:06.901 } 00:12:06.901 ], 00:12:06.901 "driver_specific": {} 00:12:06.901 } 00:12:06.901 ]' 00:12:06.901 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:06.901 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:06.901 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:06.901 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:06.901 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:06.901 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:06.901 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:06.901 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.466 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:07.466 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:07.466 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.466 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:07.466 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:09.991 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:09.991 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:09.991 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.991 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:09.991 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.991 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:09.991 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:09.992 18:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:09.992 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:10.556 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:11.489 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:11.489 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:11.489 18:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:11.489 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.489 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.489 ************************************ 00:12:11.489 START TEST filesystem_ext4 00:12:11.489 ************************************ 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:12:11.489 18:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:12:11.489 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:11.489 mke2fs 1.46.5 (30-Dec-2021) 00:12:11.489 Discarding device blocks: 0/522240 done 00:12:11.489 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:11.489 Filesystem UUID: ccc522dc-0c5a-493f-824a-9bffded1e822 00:12:11.489 Superblock backups stored on blocks: 00:12:11.489 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:11.489 00:12:11.489 Allocating group tables: 0/64 done 00:12:11.489 Writing inode tables: 0/64 done 00:12:11.746 Creating journal (8192 blocks): done 00:12:12.311 Writing superblocks and filesystem accounting information: 0/64 done 00:12:12.311 00:12:12.311 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:12:12.311 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.569 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.826 18:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2283940 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.826 00:12:12.826 real 0m1.286s 00:12:12.826 user 0m0.017s 00:12:12.826 sys 0m0.052s 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:12.826 ************************************ 00:12:12.826 END TEST filesystem_ext4 00:12:12.826 ************************************ 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:12:12.826 18:00:20 
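The ext4 run above exercises the `make_filesystem` helper traced at autotest_common.sh@924-@943. A minimal sketch of that logic, reconstructed from the xtrace: the force-flag selection (@929-@932) is taken directly from the log (ext4 forces with `-F`, btrfs and xfs with `-f`), while the retry bound of 3 is an assumption, as the loop body is not fully visible in the trace.

```shell
# Pick the force flag mkfs needs for a given fstype, as at @929-@932.
fs_force_flag() {
    if [ "$1" = ext4 ]; then
        printf '%s\n' -F
    else
        printf '%s\n' -f
    fi
}

# Sketch of make_filesystem: retry mkfs a few times in case the device
# is briefly busy. The retry count of 3 is an assumed bound.
make_filesystem_sketch() {
    fstype=$1
    dev_name=$2
    force=$(fs_force_flag "$fstype")
    i=0
    while [ "$i" -lt 3 ]; do
        # e.g. mkfs.ext4 -F /dev/nvme0n1p1, as run at @935
        if "mkfs.$fstype" "$force" "$dev_name"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

Invoked as `make_filesystem_sketch ext4 /dev/nvme0n1p1`, this reproduces the `mkfs.ext4 -F /dev/nvme0n1p1` call whose mke2fs output appears above.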
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.826 ************************************ 00:12:12.826 START TEST filesystem_btrfs 00:12:12.826 ************************************ 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:12:12.826 18:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:12:12.826 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:13.083 btrfs-progs v6.6.2 00:12:13.083 See https://btrfs.readthedocs.io for more information. 00:12:13.083 00:12:13.083 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:13.083 NOTE: several default settings have changed in version 5.15, please make sure 00:12:13.083 this does not affect your deployments: 00:12:13.083 - DUP for metadata (-m dup) 00:12:13.083 - enabled no-holes (-O no-holes) 00:12:13.083 - enabled free-space-tree (-R free-space-tree) 00:12:13.083 00:12:13.083 Label: (null) 00:12:13.083 UUID: 876a92b6-c2d9-469f-a4fa-c1f44ab49d46 00:12:13.083 Node size: 16384 00:12:13.083 Sector size: 4096 00:12:13.083 Filesystem size: 510.00MiB 00:12:13.083 Block group profiles: 00:12:13.083 Data: single 8.00MiB 00:12:13.083 Metadata: DUP 32.00MiB 00:12:13.083 System: DUP 8.00MiB 00:12:13.083 SSD detected: yes 00:12:13.083 Zoned device: no 00:12:13.083 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:13.083 Runtime features: free-space-tree 00:12:13.083 Checksum: crc32c 00:12:13.083 Number of devices: 1 00:12:13.083 Devices: 00:12:13.083 ID SIZE PATH 00:12:13.083 1 510.00MiB /dev/nvme0n1p1 00:12:13.083 00:12:13.083 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:12:13.083 18:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2283940 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.016 00:12:14.016 real 0m1.158s 00:12:14.016 user 0m0.013s 00:12:14.016 sys 0m0.114s 
00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:14.016 ************************************ 00:12:14.016 END TEST filesystem_btrfs 00:12:14.016 ************************************ 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.016 ************************************ 00:12:14.016 START TEST filesystem_xfs 00:12:14.016 ************************************ 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:12:14.016 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:14.016 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:14.016 = sectsz=512 attr=2, projid32bit=1 00:12:14.016 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:14.016 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:14.016 data = bsize=4096 blocks=130560, imaxpct=25 00:12:14.016 = sunit=0 swidth=0 blks 00:12:14.016 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:14.016 log =internal log bsize=4096 blocks=16384, version=2 00:12:14.016 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:14.016 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:14.948 Discarding blocks...Done. 
00:12:14.948 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:12:14.948 18:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.471 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.471 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2283940 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.472 18:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.472 00:12:17.472 real 0m3.574s 00:12:17.472 user 0m0.016s 00:12:17.472 sys 0m0.064s 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.472 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:17.472 ************************************ 00:12:17.472 END TEST filesystem_xfs 00:12:17.472 ************************************ 00:12:17.729 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:12:17.729 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q 
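Each of the three filesystem tests above (ext4, btrfs, xfs) drives the same check sequence at target/filesystem.sh@23-@43. A sketch of that sequence, with the mount point, device names, and pid taken from this log; this is an illustration rather than the upstream function, and actually running it needs root plus a live NVMe/TCP namespace.

```shell
# Mount the freshly formatted partition, do a small write over the
# NVMe/TCP connection, unmount, then confirm the target survived.
nvmf_filesystem_check() {
    dev=$1          # /dev/nvme0n1p1 in this run
    nvmf_pid=$2     # nvmf_tgt pid, 2283940 in this run
    mnt=/mnt/device

    mount "$dev" "$mnt" &&         # @23
    touch "$mnt/aaa" &&            # @24: exercise a write
    sync &&                        # @25
    rm "$mnt/aaa" &&               # @26
    sync &&                        # @27
    umount "$mnt" || return 1      # @30

    kill -0 "$nvmf_pid" || return 1                    # @37: target still alive
    lsblk -l -o NAME | grep -q -w nvme0n1 || return 1  # @40: device still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1            # @43: partition still present
}
```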
-w SPDKISFASTANDAWESOME 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2283940 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2283940 ']' 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2283940 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2283940 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2283940' 00:12:17.987 killing process with pid 2283940 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2283940 00:12:17.987 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2283940 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:18.553 00:12:18.553 real 0m12.151s 00:12:18.553 user 0m46.709s 00:12:18.553 sys 0m1.794s 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.553 ************************************ 00:12:18.553 END TEST nvmf_filesystem_no_in_capsule 00:12:18.553 ************************************ 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:18.553 18:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.553 ************************************ 00:12:18.553 START TEST nvmf_filesystem_in_capsule 00:12:18.553 ************************************ 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2285613 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2285613 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
2285613 ']' 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.553 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.553 [2024-07-23 18:00:26.109606] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:12:18.553 [2024-07-23 18:00:26.109713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.553 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.553 [2024-07-23 18:00:26.175736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.811 [2024-07-23 18:00:26.267499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.811 [2024-07-23 18:00:26.267562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:18.811 [2024-07-23 18:00:26.267575] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.811 [2024-07-23 18:00:26.267587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.811 [2024-07-23 18:00:26.267596] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.811 [2024-07-23 18:00:26.267667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.811 [2024-07-23 18:00:26.267729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.811 [2024-07-23 18:00:26.267797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.811 [2024-07-23 18:00:26.267799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- 
# rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.811 [2024-07-23 18:00:26.417762] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.811 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.070 Malloc1 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.070 [2024-07-23 18:00:26.594224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:19.070 18:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:19.070 { 00:12:19.070 "name": "Malloc1", 00:12:19.070 "aliases": [ 00:12:19.070 "dea5ad5a-92bc-4b0e-a191-5aa0cbffdf22" 00:12:19.070 ], 00:12:19.070 "product_name": "Malloc disk", 00:12:19.070 "block_size": 512, 00:12:19.070 "num_blocks": 1048576, 00:12:19.070 "uuid": "dea5ad5a-92bc-4b0e-a191-5aa0cbffdf22", 00:12:19.070 "assigned_rate_limits": { 00:12:19.070 "rw_ios_per_sec": 0, 00:12:19.070 "rw_mbytes_per_sec": 0, 00:12:19.070 "r_mbytes_per_sec": 0, 00:12:19.070 "w_mbytes_per_sec": 0 00:12:19.070 }, 00:12:19.070 "claimed": true, 00:12:19.070 "claim_type": "exclusive_write", 00:12:19.070 "zoned": false, 00:12:19.070 "supported_io_types": { 00:12:19.070 "read": true, 00:12:19.070 "write": true, 00:12:19.070 "unmap": true, 00:12:19.070 "flush": true, 00:12:19.070 "reset": true, 00:12:19.070 "nvme_admin": false, 00:12:19.070 "nvme_io": false, 00:12:19.070 "nvme_io_md": false, 00:12:19.070 "write_zeroes": true, 00:12:19.070 "zcopy": true, 00:12:19.070 "get_zone_info": false, 00:12:19.070 "zone_management": false, 00:12:19.070 "zone_append": false, 00:12:19.070 "compare": false, 00:12:19.070 "compare_and_write": false, 00:12:19.070 "abort": true, 00:12:19.070 "seek_hole": false, 00:12:19.070 "seek_data": false, 00:12:19.070 "copy": true, 00:12:19.070 "nvme_iov_md": 
false 00:12:19.070 }, 00:12:19.070 "memory_domains": [ 00:12:19.070 { 00:12:19.070 "dma_device_id": "system", 00:12:19.070 "dma_device_type": 1 00:12:19.070 }, 00:12:19.070 { 00:12:19.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.070 "dma_device_type": 2 00:12:19.070 } 00:12:19.070 ], 00:12:19.070 "driver_specific": {} 00:12:19.070 } 00:12:19.070 ]' 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:19.070 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.636 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.636 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 
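The `get_bdev_size` steps traced at autotest_common.sh@1378-@1388 pull `block_size` and `num_blocks` out of the `bdev_get_bdevs` JSON with `jq` and multiply them. A sketch of just the arithmetic, with the jq extraction replaced by plain parameters so it stands alone; the values are those of the Malloc1 bdev shown above.

```shell
# bdev size in MiB = block_size * num_blocks / 1024 / 1024
get_bdev_size_mib() {
    bs=$1    # block_size: 512 for Malloc1
    nb=$2    # num_blocks: 1048576 for Malloc1
    echo $(( bs * nb / 1024 / 1024 ))
}
```

For Malloc1 this yields 512, matching the `bdev_size=512` and `echo 512` in the trace; the `malloc_size=536870912` that follows is that same size expressed in bytes.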
00:12:19.636 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.636 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:19.636 18:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:22.167 
18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:22.167 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:22.425 18:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:23.356 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:23.357 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:23.357 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:23.357 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.357 18:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.357 ************************************ 00:12:23.357 START TEST filesystem_in_capsule_ext4 00:12:23.357 ************************************ 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@930 -- # force=-F 00:12:23.357 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:23.357 mke2fs 1.46.5 (30-Dec-2021) 00:12:23.614 Discarding device blocks: 0/522240 done 00:12:23.614 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:23.614 Filesystem UUID: 9c8f1cdb-ef41-499f-ae00-a4f3ad739314 00:12:23.614 Superblock backups stored on blocks: 00:12:23.614 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:23.614 00:12:23.614 Allocating group tables: 0/64 done 00:12:23.614 Writing inode tables: 0/64 done 00:12:23.614 Creating journal (8192 blocks): done 00:12:23.871 Writing superblocks and filesystem accounting information: 0/64 done 00:12:23.871 00:12:23.871 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:12:23.871 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 
00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2285613 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:24.802 00:12:24.802 real 0m1.220s 00:12:24.802 user 0m0.007s 00:12:24.802 sys 0m0.059s 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:24.802 ************************************ 00:12:24.802 END TEST filesystem_in_capsule_ext4 00:12:24.802 ************************************ 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:24.802 18:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.802 ************************************ 00:12:24.802 START TEST filesystem_in_capsule_btrfs 00:12:24.802 ************************************ 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:12:24.802 
18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:12:24.802 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:25.060 btrfs-progs v6.6.2 00:12:25.060 See https://btrfs.readthedocs.io for more information. 00:12:25.060 00:12:25.060 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:25.060 NOTE: several default settings have changed in version 5.15, please make sure 00:12:25.060 this does not affect your deployments: 00:12:25.060 - DUP for metadata (-m dup) 00:12:25.060 - enabled no-holes (-O no-holes) 00:12:25.060 - enabled free-space-tree (-R free-space-tree) 00:12:25.060 00:12:25.060 Label: (null) 00:12:25.060 UUID: d63de6c2-592a-43cd-9162-e81927226cd6 00:12:25.060 Node size: 16384 00:12:25.060 Sector size: 4096 00:12:25.060 Filesystem size: 510.00MiB 00:12:25.060 Block group profiles: 00:12:25.060 Data: single 8.00MiB 00:12:25.060 Metadata: DUP 32.00MiB 00:12:25.060 System: DUP 8.00MiB 00:12:25.060 SSD detected: yes 00:12:25.060 Zoned device: no 00:12:25.060 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:25.060 Runtime features: free-space-tree 00:12:25.060 Checksum: crc32c 00:12:25.060 Number of devices: 1 00:12:25.060 Devices: 00:12:25.060 ID SIZE PATH 00:12:25.060 1 510.00MiB /dev/nvme0n1p1 00:12:25.060 00:12:25.060 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:12:25.060 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 
-- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2285613 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.625 00:12:25.625 real 0m0.988s 00:12:25.625 user 0m0.022s 00:12:25.625 sys 0m0.113s 00:12:25.625 18:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:25.625 ************************************ 00:12:25.625 END TEST filesystem_in_capsule_btrfs 00:12:25.625 ************************************ 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.625 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.883 ************************************ 00:12:25.883 START TEST filesystem_in_capsule_xfs 00:12:25.883 ************************************ 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.883 18:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:12:25.883 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:25.883 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:25.883 = sectsz=512 attr=2, projid32bit=1 00:12:25.883 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:25.883 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:25.883 data = bsize=4096 blocks=130560, imaxpct=25 00:12:25.883 = sunit=0 swidth=0 blks 00:12:25.883 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:25.883 log =internal log bsize=4096 blocks=16384, version=2 00:12:25.883 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:25.883 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:26.846 Discarding blocks...Done. 
00:12:26.846 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:12:26.846 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.373 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.373 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:29.373 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.373 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:29.373 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:29.373 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.373 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2285613 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.374 00:12:29.374 real 0m3.218s 00:12:29.374 user 0m0.015s 00:12:29.374 sys 0m0.058s 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:29.374 ************************************ 00:12:29.374 END TEST filesystem_in_capsule_xfs 00:12:29.374 ************************************ 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2285613 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2285613 ']' 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2285613 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2285613 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2285613' 00:12:29.374 killing process with pid 2285613 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2285613 00:12:29.374 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2285613 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:29.632 00:12:29.632 real 0m11.070s 00:12:29.632 user 0m42.449s 00:12:29.632 sys 0m1.684s 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.632 ************************************ 00:12:29.632 END TEST nvmf_filesystem_in_capsule 00:12:29.632 ************************************ 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.632 18:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.632 rmmod nvme_tcp 00:12:29.632 rmmod nvme_fabrics 00:12:29.632 rmmod nvme_keyring 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.632 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.166 00:12:32.166 real 0m27.849s 00:12:32.166 user 1m30.146s 00:12:32.166 sys 0m5.132s 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:32.166 ************************************ 00:12:32.166 END TEST nvmf_filesystem 00:12:32.166 ************************************ 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.166 ************************************ 00:12:32.166 START TEST nvmf_target_discovery 00:12:32.166 ************************************ 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:32.166 * Looking for test storage... 
00:12:32.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.166 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.167 18:00:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.073 
18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:12:34.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:34.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.073 18:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:34.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.073 18:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:34.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:34.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:12:34.073 00:12:34.073 --- 10.0.0.2 ping statistics --- 00:12:34.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.073 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:12:34.073 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:12:34.073 00:12:34.073 --- 10.0.0.1 ping statistics --- 00:12:34.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.073 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:34.074 18:00:41 
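The namespace plumbing the harness just verified with those two pings can be sketched as below. This is a minimal reconstruction, not the harness itself: the device names `cvl_0_0`/`cvl_0_1`, the namespace name and the `10.0.0.x/24` addresses are taken from this log, a real run needs root, and the script defaults to printing the commands (`DRY_RUN=1`) rather than executing them.

```shell
# Sketch of the netns topology built in the log above: the target-side NIC
# (cvl_0_0) is moved into a namespace with 10.0.0.2/24, the initiator-side
# NIC (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, then connectivity
# is checked both ways. DRY_RUN=1 (the default here) prints the commands.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"          # target side lives in the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

With `DRY_RUN=0` and root privileges this reproduces the topology the log's `nvmf_tcp_init` step sets up before `nvmf_tgt` is launched inside the namespace.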
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2288970 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2288970 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2288970 ']' 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.074 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.074 [2024-07-23 18:00:41.727675] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:12:34.074 [2024-07-23 18:00:41.727774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.332 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.332 [2024-07-23 18:00:41.794726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.332 [2024-07-23 18:00:41.875221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.332 [2024-07-23 18:00:41.875279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.332 [2024-07-23 18:00:41.875325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.332 [2024-07-23 18:00:41.875339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.332 [2024-07-23 18:00:41.875349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:34.332 [2024-07-23 18:00:41.875427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.332 [2024-07-23 18:00:41.875499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.332 [2024-07-23 18:00:41.875556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.332 [2024-07-23 18:00:41.875558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.590 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.590 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:12:34.590 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.590 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.590 18:00:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 [2024-07-23 18:00:42.023524] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:34.590 18:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 Null1 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 [2024-07-23 18:00:42.063845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 Null2 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:34.590 
18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.590 Null3
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.590 Null4
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:12:34.590 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.591 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:12:34.848
00:12:34.848 Discovery Log Number of Records 6, Generation counter 6
00:12:34.848 =====Discovery Log Entry 0======
00:12:34.848 trtype: tcp
00:12:34.848 adrfam: ipv4
00:12:34.848 subtype: current discovery subsystem
00:12:34.848 treq: not required
00:12:34.848 portid: 0
00:12:34.848 trsvcid: 4420
00:12:34.848 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:34.848 traddr: 10.0.0.2
00:12:34.848 eflags: explicit discovery connections, duplicate discovery information
00:12:34.848 sectype: none
00:12:34.848 =====Discovery Log Entry 1======
00:12:34.848 trtype: tcp
00:12:34.848 adrfam: ipv4
00:12:34.848 subtype: nvme subsystem
00:12:34.848 treq: not required
00:12:34.848 portid: 0
00:12:34.848 trsvcid: 4420
00:12:34.848 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:34.848 traddr: 10.0.0.2
00:12:34.848 eflags: none
00:12:34.848 sectype: none
00:12:34.848 =====Discovery Log Entry 2======
00:12:34.848 trtype: tcp
00:12:34.848 adrfam: ipv4
00:12:34.848 subtype: nvme subsystem
00:12:34.848 treq: not required
00:12:34.848 portid: 0
00:12:34.848 trsvcid: 4420
00:12:34.848 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:34.848 traddr: 10.0.0.2
00:12:34.848 eflags: none
00:12:34.848 sectype: none
00:12:34.848 =====Discovery Log Entry 3======
00:12:34.848 trtype: tcp
00:12:34.848 adrfam: ipv4
00:12:34.848 subtype: nvme subsystem
00:12:34.848 treq: not required
00:12:34.848 portid: 0
00:12:34.848 trsvcid: 4420
00:12:34.848 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:34.848 traddr: 10.0.0.2
00:12:34.848 eflags: none
00:12:34.848 sectype: none
00:12:34.848 =====Discovery Log Entry 4======
00:12:34.849 trtype: tcp
00:12:34.849 adrfam: ipv4
00:12:34.849 subtype: nvme subsystem
00:12:34.849 treq: not required
00:12:34.849 portid: 0
00:12:34.849 trsvcid: 4420
00:12:34.849 subnqn: nqn.2016-06.io.spdk:cnode4
00:12:34.849 traddr: 10.0.0.2
00:12:34.849 eflags: none
00:12:34.849 sectype: none
00:12:34.849 =====Discovery Log Entry 5======
00:12:34.849 trtype: tcp
00:12:34.849 adrfam: ipv4
00:12:34.849 subtype: discovery subsystem referral
00:12:34.849 treq: not required
00:12:34.849 portid: 0
00:12:34.849 trsvcid: 4430
00:12:34.849 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:34.849 traddr: 10.0.0.2
00:12:34.849 eflags: none
00:12:34.849 sectype: none
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:34.849 Perform nvmf subsystem discovery via RPC
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.849 [
00:12:34.849 {
00:12:34.849 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:34.849 "subtype": "Discovery",
00:12:34.849 "listen_addresses": [
00:12:34.849 {
00:12:34.849 "trtype": "TCP",
00:12:34.849 "adrfam": "IPv4",
00:12:34.849 "traddr": "10.0.0.2",
00:12:34.849 "trsvcid": "4420"
00:12:34.849 }
00:12:34.849 ],
00:12:34.849 "allow_any_host": true,
00:12:34.849 "hosts": []
00:12:34.849 },
00:12:34.849 {
00:12:34.849 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:34.849 "subtype": "NVMe",
00:12:34.849 "listen_addresses": [
00:12:34.849 {
00:12:34.849 "trtype": "TCP",
00:12:34.849 "adrfam": "IPv4",
00:12:34.849 "traddr": "10.0.0.2",
00:12:34.849 "trsvcid": "4420"
00:12:34.849 }
00:12:34.849 ],
00:12:34.849 "allow_any_host": true,
00:12:34.849 "hosts": [],
00:12:34.849 "serial_number": "SPDK00000000000001",
00:12:34.849 "model_number": "SPDK bdev Controller",
00:12:34.849 "max_namespaces": 32,
00:12:34.849 "min_cntlid": 1,
00:12:34.849 "max_cntlid": 65519,
00:12:34.849 "namespaces": [
00:12:34.849 {
00:12:34.849 "nsid": 1,
00:12:34.849 "bdev_name": "Null1",
00:12:34.849 "name": "Null1",
00:12:34.849 "nguid": "F171D77C66FC4A4C9D397964B3B7D5E3",
00:12:34.849 "uuid": "f171d77c-66fc-4a4c-9d39-7964b3b7d5e3"
00:12:34.849 }
00:12:34.849 ]
00:12:34.849 },
00:12:34.849 {
00:12:34.849 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:34.849 "subtype": "NVMe",
00:12:34.849 "listen_addresses": [
00:12:34.849 {
00:12:34.849 "trtype": "TCP",
00:12:34.849 "adrfam": "IPv4",
00:12:34.849 "traddr": "10.0.0.2",
00:12:34.849 "trsvcid": "4420"
00:12:34.849 }
00:12:34.849 ],
00:12:34.849 "allow_any_host": true,
00:12:34.849 "hosts": [],
00:12:34.849 "serial_number": "SPDK00000000000002",
00:12:34.849 "model_number": "SPDK bdev Controller",
00:12:34.849 "max_namespaces": 32,
00:12:34.849 "min_cntlid": 1,
00:12:34.849 "max_cntlid": 65519,
00:12:34.849 "namespaces": [
00:12:34.849 {
00:12:34.849 "nsid": 1,
00:12:34.849 "bdev_name": "Null2",
00:12:34.849 "name": "Null2",
00:12:34.849 "nguid": "842F4325D55D4DABA890370ED421AECC",
00:12:34.849 "uuid": "842f4325-d55d-4dab-a890-370ed421aecc"
00:12:34.849 }
00:12:34.849 ]
00:12:34.849 },
00:12:34.849 {
00:12:34.849 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:34.849 "subtype": "NVMe",
00:12:34.849 "listen_addresses": [
00:12:34.849 {
00:12:34.849 "trtype": "TCP",
00:12:34.849 "adrfam": "IPv4",
00:12:34.849 "traddr": "10.0.0.2",
00:12:34.849 "trsvcid": "4420"
00:12:34.849 }
00:12:34.849 ],
00:12:34.849 "allow_any_host": true,
00:12:34.849 "hosts": [],
00:12:34.849 "serial_number": "SPDK00000000000003",
00:12:34.849 "model_number": "SPDK bdev Controller",
00:12:34.849 "max_namespaces": 32,
00:12:34.849 "min_cntlid": 1,
00:12:34.849 "max_cntlid": 65519,
00:12:34.849 "namespaces": [
00:12:34.849 {
00:12:34.849 "nsid": 1,
00:12:34.849 "bdev_name": "Null3",
00:12:34.849 "name": "Null3",
00:12:34.849 "nguid": "155F7A5177164FA99D8E8B882FCB1A40",
00:12:34.849 "uuid": "155f7a51-7716-4fa9-9d8e-8b882fcb1a40"
00:12:34.849 }
00:12:34.849 ]
00:12:34.849 },
00:12:34.849 {
00:12:34.849 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:34.849 "subtype": "NVMe",
00:12:34.849 "listen_addresses": [
00:12:34.849 {
00:12:34.849 "trtype": "TCP",
00:12:34.849 "adrfam": "IPv4",
00:12:34.849 "traddr": "10.0.0.2",
00:12:34.849 "trsvcid": "4420"
00:12:34.849 }
00:12:34.849 ],
00:12:34.849 "allow_any_host": true,
00:12:34.849 "hosts": [],
00:12:34.849 "serial_number": "SPDK00000000000004",
00:12:34.849 "model_number": "SPDK bdev Controller",
00:12:34.849 "max_namespaces": 32,
00:12:34.849 "min_cntlid": 1,
00:12:34.849 "max_cntlid": 65519,
00:12:34.849 "namespaces": [
00:12:34.849 {
00:12:34.849 "nsid": 1,
00:12:34.849 "bdev_name": "Null4",
00:12:34.849 "name": "Null4",
00:12:34.849 "nguid": "CAFC2FD262F34457AE90956C70546EC4",
00:12:34.849 "uuid": "cafc2fd2-62f3-4457-ae90-956c70546ec4"
00:12:34.849 }
00:12:34.849 ]
00:12:34.849 }
00:12:34.849 ]
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.849 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:34.850 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:35.107 rmmod nvme_tcp
00:12:35.107 rmmod nvme_fabrics
00:12:35.107 rmmod nvme_keyring
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2288970 ']'
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2288970
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2288970 ']'
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2288970
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2288970
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:35.107 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2288970'
00:12:35.108 killing process with pid 2288970
18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2288970
18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2288970
00:12:35.367 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:35.367 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:35.367 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:35.367 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:35.367 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:35.367 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:35.367 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:35.367 18:00:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:37.268
00:12:37.268 real 0m5.558s
00:12:37.268 user 0m4.565s
00:12:37.268 sys 0m1.923s
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:37.268 ************************************
00:12:37.268 END TEST nvmf_target_discovery
00:12:37.268 ************************************
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:37.268 ************************************
00:12:37.268 START TEST nvmf_referrals
00:12:37.268 ************************************
00:12:37.268 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:37.526 * Looking for test storage...
00:12:37.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no
00:12:37.526 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns
00:12:37.527 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:37.527 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:37.527 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:37.527 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:12:37.527 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:12:37.527 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable
00:12:37.527 18:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=()
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=()
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=()
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=()
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=()
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=()
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=()
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:12:39.426 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:39.426 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:12:39.427 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- #
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:39.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:39.427 Found net devices under 0000:0a:00.1: cvl_0_1 
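The `pci_net_devs=("${pci_net_devs[@]##*/}")` step above globs the sysfs `net/` directory for each PCI device and keeps only the basename as the interface name. A minimal standalone sketch of that parameter expansion (the array contents here are a stand-in for the glob result, using the PCI address seen in this log):

```shell
# stand-in for pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0")
# strip everything up to and including the last '/' from each element,
# mirroring nvmf/common.sh@399
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"
```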
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
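The `(( 2 > 1 ))` branch above assigns the discovered NICs to roles: with two or more devices, the first becomes the target interface and the second the initiator. A sketch of that selection logic, using the device names from this run:

```shell
# net_devs as populated by the PCI scan earlier in this log
net_devs=(cvl_0_0 cvl_0_1)
TCP_INTERFACE_LIST=("${net_devs[@]}")
if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
  # two physical ports: dedicate one per role
  NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
  NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}
fi
```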
00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.427 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.685 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:12:39.686 00:12:39.686 --- 10.0.0.2 ping statistics --- 00:12:39.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.686 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:12:39.686 00:12:39.686 --- 10.0.0.1 ping statistics --- 00:12:39.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.686 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2291054 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2291054 00:12:39.686 
18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2291054 ']' 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.686 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.686 [2024-07-23 18:00:47.230565] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:12:39.686 [2024-07-23 18:00:47.230673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.686 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.686 [2024-07-23 18:00:47.298407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.943 [2024-07-23 18:00:47.388925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.943 [2024-07-23 18:00:47.388986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
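The referral checks that follow (`get_referral_ips rpc` vs `get_referral_ips nvme`) each extract the `traddr` fields, `sort` them, and compare the space-joined lists. A simplified sketch of that sort-and-compare pattern, with the addresses hard-coded rather than pulled from `rpc_cmd`/`nvme discover`:

```shell
# sort whitespace-separated addresses and re-join on single spaces,
# approximating what get_referral_ips produces after its `sort`
get_sorted_ips() {
  printf '%s\n' "$@" | sort | tr '\n' ' ' | sed 's/ $//'
}

# stand-ins for the RPC view and the nvme-discover view of the referrals
rpc_ips=$(get_sorted_ips 127.0.0.3 127.0.0.4 127.0.0.2)
nvme_ips=$(get_sorted_ips 127.0.0.2 127.0.0.3 127.0.0.4)

# the test passes only when both views agree
[[ $rpc_ips == "$nvme_ips" ]] && echo match
```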
00:12:39.944 [2024-07-23 18:00:47.388998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.944 [2024-07-23 18:00:47.389009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.944 [2024-07-23 18:00:47.389018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.944 [2024-07-23 18:00:47.389079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.944 [2024-07-23 18:00:47.389142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.944 [2024-07-23 18:00:47.389215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.944 [2024-07-23 18:00:47.389218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.944 [2024-07-23 18:00:47.544838] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.944 [2024-07-23 18:00:47.557086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:39.944 18:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.944 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.201 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.459 18:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.459 18:00:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.459 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.716 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.974 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.231 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:41.231 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:41.231 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:41.231 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:41.232 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:41.232 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.232 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:41.232 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:41.232 18:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:41.232 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:41.232 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:41.232 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.232 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.489 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.489 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.489 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:41.489 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:41.489 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:41.489 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:41.489 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.489 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:41.489 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.746 rmmod nvme_tcp 00:12:41.746 rmmod nvme_fabrics 00:12:41.746 rmmod nvme_keyring 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2291054 ']' 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2291054 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2291054 ']' 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2291054 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2291054 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2291054' 00:12:41.746 killing process with pid 2291054 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@967 -- # kill 2291054 00:12:41.746 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2291054 00:12:42.003 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:42.003 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:42.003 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:42.003 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.003 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.003 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.003 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.004 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:43.910 00:12:43.910 real 0m6.613s 00:12:43.910 user 0m9.726s 00:12:43.910 sys 0m2.112s 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.910 ************************************ 00:12:43.910 END TEST nvmf_referrals 00:12:43.910 ************************************ 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh 
--transport=tcp 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.910 ************************************ 00:12:43.910 START TEST nvmf_connect_disconnect 00:12:43.910 ************************************ 00:12:43.910 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:44.169 * Looking for test storage... 00:12:44.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.169 18:00:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.169 18:00:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.169 18:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.103 18:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.103 18:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:46.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:46.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.103 18:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:46.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.103 
18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:46.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.103 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:46.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:12:46.361 00:12:46.361 --- 10.0.0.2 ping statistics --- 00:12:46.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.361 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:12:46.361 00:12:46.361 --- 10.0.0.1 ping statistics --- 00:12:46.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.361 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2293344 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2293344 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2293344 ']' 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.361 18:00:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.361 [2024-07-23 18:00:53.913995] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:12:46.361 [2024-07-23 18:00:53.914070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.361 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.361 [2024-07-23 18:00:53.977950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.618 [2024-07-23 18:00:54.067766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.618 [2024-07-23 18:00:54.067817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.618 [2024-07-23 18:00:54.067830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.618 [2024-07-23 18:00:54.067856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.618 [2024-07-23 18:00:54.067866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:46.618 [2024-07-23 18:00:54.067945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.618 [2024-07-23 18:00:54.068011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.618 [2024-07-23 18:00:54.068078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.618 [2024-07-23 18:00:54.068080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.618 [2024-07-23 18:00:54.219733] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.618 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.618 18:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.618 [2024-07-23 18:00:54.277016] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.875 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.875 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:46.875 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:46.875 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:46.875 18:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:49.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.533 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [remaining "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" iterations, timestamps 00:13:26.058 through 00:16:28.135, elided] 00:16:30.660
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.607 rmmod nvme_tcp 00:16:37.607 rmmod nvme_fabrics 00:16:37.607 rmmod nvme_keyring 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2293344 ']' 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2293344 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 2293344 ']' 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2293344 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2293344 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2293344' 00:16:37.607 killing process with pid 2293344 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2293344 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2293344 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.607 18:04:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:39.513 00:16:39.513 real 3m55.473s 00:16:39.513 user 14m55.569s 00:16:39.513 sys 0m35.413s 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:39.513 ************************************ 00:16:39.513 END TEST nvmf_connect_disconnect 00:16:39.513 ************************************ 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.513 ************************************ 00:16:39.513 START TEST nvmf_multitarget 00:16:39.513 ************************************ 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:39.513 * Looking for test storage... 
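The 100 connect/disconnect iterations recorded above (connect_disconnect.sh sets num_iterations=100 and NVME_CONNECT='nvme connect -i 8') reduce to a simple loop. In this sketch the nvme-cli calls are passed in as commands so the loop shape can be exercised without hardware; `connect_disconnect_loop` is a hypothetical name, and the commented invocation mirrors the transport, address, port, and subsystem NQN from the log.

```shell
# Run <iterations> rounds of connect followed by disconnect, aborting on the
# first failure, as the test above does against the TCP listener.
connect_disconnect_loop() {
  local iterations=$1 connect_cmd=$2 disconnect_cmd=$3
  for ((i = 1; i <= iterations; i++)); do
    $connect_cmd    || return 1
    $disconnect_cmd || return 1
  done
}

# In the run above this is roughly:
#   connect_disconnect_loop 100 \
#     "nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1" \
#     "nvme disconnect -n nqn.2016-06.io.spdk:cnode1"
```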
00:16:39.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
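The `nvme gen-hostnqn` call above (nvmf/common.sh@17) yields a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:&lt;uuid&gt;, as seen in NVME_HOSTNQN. An equivalent sketch that does not need nvme-cli, assuming the kernel's random-uuid source or `uuidgen` is available:

```shell
# Generate a host NQN in the same uuid-based format nvme gen-hostnqn emits.
gen_hostnqn() {
  local uuid
  # prefer the kernel's uuid source, fall back to uuidgen
  uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
  printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$uuid"
}
```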
00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.513 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.514 18:04:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.043 18:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:42.043 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.043 
18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:42.043 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:42.043 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:42.043 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes 
]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.043 18:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.043 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:42.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:16:42.044 00:16:42.044 --- 10.0.0.2 ping statistics --- 00:16:42.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.044 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:16:42.044 00:16:42.044 --- 10.0.0.1 ping statistics --- 00:16:42.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.044 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2324198 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2324198 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2324198 ']' 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.044 [2024-07-23 18:04:49.364245] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:16:42.044 [2024-07-23 18:04:49.364346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.044 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.044 [2024-07-23 18:04:49.430482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.044 [2024-07-23 18:04:49.515370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.044 [2024-07-23 18:04:49.515427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:42.044 [2024-07-23 18:04:49.515454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.044 [2024-07-23 18:04:49.515464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.044 [2024-07-23 18:04:49.515474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.044 [2024-07-23 18:04:49.515589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.044 [2024-07-23 18:04:49.515694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.044 [2024-07-23 18:04:49.515987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.044 [2024-07-23 18:04:49.515990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:42.044 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:42.044 18:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:42.301 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:42.301 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:42.301 "nvmf_tgt_1" 00:16:42.301 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:42.559 "nvmf_tgt_2" 00:16:42.559 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:42.559 18:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:42.559 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:42.559 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:42.559 true 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:42.816 true 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.816 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.816 rmmod nvme_tcp 00:16:42.816 rmmod nvme_fabrics 00:16:42.816 rmmod nvme_keyring 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2324198 ']' 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2324198 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2324198 ']' 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2324198 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2324198 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2324198' 00:16:43.075 killing process with pid 2324198 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2324198 00:16:43.075 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2324198 00:16:43.334 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.334 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.334 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.334 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.334 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.334 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.334 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.334 18:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.242 00:16:45.242 real 
0m5.686s 00:16:45.242 user 0m6.313s 00:16:45.242 sys 0m1.897s 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 ************************************ 00:16:45.242 END TEST nvmf_multitarget 00:16:45.242 ************************************ 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 ************************************ 00:16:45.242 START TEST nvmf_rpc 00:16:45.242 ************************************ 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:45.242 * Looking for test storage... 
00:16:45.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.242 
18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.242 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.243 18:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.243 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.501 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.501 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.501 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.501 18:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:47.404 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:16:47.405 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:16:47.405 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:16:47.405 Found net devices under 0000:0a:00.0: cvl_0_0
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:16:47.405 Found net devices under 0000:0a:00.1: cvl_0_1
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:47.405 18:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:47.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:47.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms
00:16:47.405 
00:16:47.405 --- 10.0.0.2 ping statistics ---
00:16:47.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:47.405 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:47.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:47.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms
00:16:47.405 
00:16:47.405 --- 10.0.0.1 ping statistics ---
00:16:47.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:47.405 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2326291
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2326291
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2326291 ']'
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:47.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:47.405 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:47.664 [2024-07-23 18:04:55.085652] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:16:47.664 [2024-07-23 18:04:55.085740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:47.664 EAL: No free 2048 kB hugepages reported on node 1
00:16:47.664 [2024-07-23 18:04:55.149570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:47.664 [2024-07-23 18:04:55.234399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:47.664 [2024-07-23 18:04:55.234458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:47.664 [2024-07-23 18:04:55.234486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:47.664 [2024-07-23 18:04:55.234496] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:47.664 [2024-07-23 18:04:55.234505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:47.664 [2024-07-23 18:04:55.234647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:47.664 [2024-07-23 18:04:55.234705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:16:47.664 [2024-07-23 18:04:55.234771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:16:47.664 [2024-07-23 18:04:55.234774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:16:47.922 "tick_rate": 2700000000,
00:16:47.922 "poll_groups": [
00:16:47.922 {
00:16:47.922 "name": "nvmf_tgt_poll_group_000",
00:16:47.922 "admin_qpairs": 0,
00:16:47.922 "io_qpairs": 0,
00:16:47.922 "current_admin_qpairs": 0,
00:16:47.922 "current_io_qpairs": 0,
00:16:47.922 "pending_bdev_io": 0,
00:16:47.922 "completed_nvme_io": 0,
00:16:47.922 "transports": []
00:16:47.922 },
00:16:47.922 {
00:16:47.922 "name": "nvmf_tgt_poll_group_001",
00:16:47.922 "admin_qpairs": 0,
00:16:47.922 "io_qpairs": 0,
00:16:47.922 "current_admin_qpairs": 0,
00:16:47.922 "current_io_qpairs": 0,
00:16:47.922 "pending_bdev_io": 0,
00:16:47.922 "completed_nvme_io": 0,
00:16:47.922 "transports": []
00:16:47.922 },
00:16:47.922 {
00:16:47.922 "name": "nvmf_tgt_poll_group_002",
00:16:47.922 "admin_qpairs": 0,
00:16:47.922 "io_qpairs": 0,
00:16:47.922 "current_admin_qpairs": 0,
00:16:47.922 "current_io_qpairs": 0,
00:16:47.922 "pending_bdev_io": 0,
00:16:47.922 "completed_nvme_io": 0,
00:16:47.922 "transports": []
00:16:47.922 },
00:16:47.922 {
00:16:47.922 "name": "nvmf_tgt_poll_group_003",
00:16:47.922 "admin_qpairs": 0,
00:16:47.922 "io_qpairs": 0,
00:16:47.922 "current_admin_qpairs": 0,
00:16:47.922 "current_io_qpairs": 0,
00:16:47.922 "pending_bdev_io": 0,
00:16:47.922 "completed_nvme_io": 0,
00:16:47.922 "transports": []
00:16:47.922 }
00:16:47.922 ]
00:16:47.922 }'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:47.922 [2024-07-23 18:04:55.475038] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:16:47.922 "tick_rate": 2700000000,
00:16:47.922 "poll_groups": [
00:16:47.922 {
00:16:47.922 "name": "nvmf_tgt_poll_group_000",
00:16:47.922 "admin_qpairs": 0,
00:16:47.922 "io_qpairs": 0,
00:16:47.922 "current_admin_qpairs": 0,
00:16:47.922 "current_io_qpairs": 0,
00:16:47.922 "pending_bdev_io": 0,
00:16:47.922 "completed_nvme_io": 0,
00:16:47.922 "transports": [
00:16:47.922 {
00:16:47.922 "trtype": "TCP"
00:16:47.922 }
00:16:47.922 ]
00:16:47.922 },
00:16:47.922 {
00:16:47.922 "name": "nvmf_tgt_poll_group_001",
00:16:47.922 "admin_qpairs": 0,
00:16:47.922 "io_qpairs": 0,
00:16:47.922 "current_admin_qpairs": 0,
00:16:47.922 "current_io_qpairs": 0,
00:16:47.922 "pending_bdev_io": 0,
00:16:47.922 "completed_nvme_io": 0,
00:16:47.922 "transports": [
00:16:47.922 {
00:16:47.922 "trtype": "TCP"
00:16:47.922 }
00:16:47.922 ]
00:16:47.922 },
00:16:47.922 {
00:16:47.922 "name": "nvmf_tgt_poll_group_002",
00:16:47.922 "admin_qpairs": 0,
00:16:47.922 "io_qpairs": 0,
00:16:47.922 "current_admin_qpairs": 0,
00:16:47.922 "current_io_qpairs": 0,
00:16:47.922 "pending_bdev_io": 0,
00:16:47.922 "completed_nvme_io": 0,
00:16:47.922 "transports": [
00:16:47.922 {
00:16:47.922 "trtype": "TCP"
00:16:47.922 }
00:16:47.922 ]
00:16:47.922 },
00:16:47.922 {
00:16:47.922 "name": "nvmf_tgt_poll_group_003",
00:16:47.922 "admin_qpairs": 0,
00:16:47.922 "io_qpairs": 0,
00:16:47.922 "current_admin_qpairs": 0,
00:16:47.922 "current_io_qpairs": 0,
00:16:47.922 "pending_bdev_io": 0,
00:16:47.922 "completed_nvme_io": 0,
00:16:47.922 "transports": [
00:16:47.922 {
00:16:47.922 "trtype": "TCP"
00:16:47.922 }
00:16:47.922 ]
00:16:47.922 }
00:16:47.922 ]
00:16:47.922 }'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:47.922 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:48.181 Malloc1
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:48.181 [2024-07-23 18:04:55.632415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]]
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:48.181 [2024-07-23 18:04:55.654935] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:16:48.181 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:48.181 could not add new controller: failed to write to nvme-fabrics device
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:48.181 18:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:48.746 18:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:16:48.746 18:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:48.746 18:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:48.746 18:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:48.746 18:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:51.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]]
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:51.273 [2024-07-23 18:04:58.484208] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:16:51.273 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:51.273 could not add new controller: failed to write to nvme-fabrics device
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:51.273 18:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:51.533 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:16:51.533 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:51.533 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:51.533 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:51.533 18:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:54.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.059 [2024-07-23 18:05:01.297408] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:54.059 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:54.060 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:54.060 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.060 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:54.060 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:54.060 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:54.060 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.060 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:54.060 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.318 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.318 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:54.318 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.318 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:54.318 18:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:56.841 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:56.841 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:56.841 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.841 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:56.841 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.841 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:56.841 18:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.841 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.841 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:56.841 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:56.841 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:16:56.841 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:56.841 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.841 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:56.841 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 [2024-07-23 18:05:04.072207] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.842 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.099 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.099 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:16:57.099 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.099 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:57.099 18:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.625 [2024-07-23 18:05:06.794413] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.625 18:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:59.883 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:59.883 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:59.883 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:59.883 18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:59.883 
18:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:01.780 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:01.780 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:01.780 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:01.780 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:01.780 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:01.780 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:01.780 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.038 [2024-07-23 18:05:09.571077] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.038 18:05:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.972 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:02.972 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:02.972 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.972 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:02.972 18:05:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.868 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.869 [2024-07-23 18:05:12.420051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.869 18:05:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.433 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:05.433 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:05.433 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.433 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:05.433 18:05:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:07.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.997 18:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.997 [2024-07-23 18:05:15.208577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.997 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 
18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 
18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 [2024-07-23 18:05:15.256606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 [2024-07-23 18:05:15.304774] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 [2024-07-23 18:05:15.352900] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 [2024-07-23 18:05:15.401063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.998 18:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.998 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.999 18:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:07.999 "tick_rate": 2700000000, 00:17:07.999 "poll_groups": [ 00:17:07.999 { 00:17:07.999 "name": "nvmf_tgt_poll_group_000", 00:17:07.999 "admin_qpairs": 2, 00:17:07.999 "io_qpairs": 84, 00:17:07.999 "current_admin_qpairs": 0, 00:17:07.999 "current_io_qpairs": 0, 00:17:07.999 "pending_bdev_io": 0, 00:17:07.999 "completed_nvme_io": 233, 00:17:07.999 "transports": [ 00:17:07.999 { 00:17:07.999 "trtype": "TCP" 00:17:07.999 } 00:17:07.999 ] 00:17:07.999 }, 00:17:07.999 { 00:17:07.999 "name": "nvmf_tgt_poll_group_001", 00:17:07.999 "admin_qpairs": 2, 00:17:07.999 "io_qpairs": 84, 00:17:07.999 "current_admin_qpairs": 0, 00:17:07.999 "current_io_qpairs": 0, 00:17:07.999 "pending_bdev_io": 0, 00:17:07.999 "completed_nvme_io": 181, 00:17:07.999 "transports": [ 00:17:07.999 { 00:17:07.999 "trtype": "TCP" 00:17:07.999 } 00:17:07.999 ] 00:17:07.999 }, 00:17:07.999 { 00:17:07.999 "name": "nvmf_tgt_poll_group_002", 00:17:07.999 "admin_qpairs": 1, 00:17:07.999 "io_qpairs": 84, 00:17:07.999 "current_admin_qpairs": 0, 00:17:07.999 "current_io_qpairs": 0, 00:17:07.999 "pending_bdev_io": 0, 00:17:07.999 "completed_nvme_io": 137, 00:17:07.999 "transports": [ 00:17:07.999 { 00:17:07.999 "trtype": "TCP" 00:17:07.999 } 00:17:07.999 ] 00:17:07.999 }, 00:17:07.999 { 00:17:07.999 "name": "nvmf_tgt_poll_group_003", 00:17:07.999 "admin_qpairs": 2, 00:17:07.999 "io_qpairs": 84, 00:17:07.999 "current_admin_qpairs": 0, 00:17:07.999 "current_io_qpairs": 0, 00:17:07.999 "pending_bdev_io": 0, 
00:17:07.999 "completed_nvme_io": 135, 00:17:07.999 "transports": [ 00:17:07.999 { 00:17:07.999 "trtype": "TCP" 00:17:07.999 } 00:17:07.999 ] 00:17:07.999 } 00:17:07.999 ] 00:17:07.999 }' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.999 rmmod nvme_tcp 00:17:07.999 rmmod nvme_fabrics 00:17:07.999 rmmod nvme_keyring 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2326291 ']' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2326291 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2326291 ']' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2326291 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2326291 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2326291' 00:17:07.999 killing process with pid 2326291 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2326291 00:17:07.999 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@972 -- # wait 2326291 00:17:08.259 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.259 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.259 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.259 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.259 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.259 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.259 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.259 18:05:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.791 00:17:10.791 real 0m25.076s 00:17:10.791 user 1m21.669s 00:17:10.791 sys 0m4.084s 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.791 ************************************ 00:17:10.791 END TEST nvmf_rpc 00:17:10.791 ************************************ 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.791 ************************************ 00:17:10.791 START TEST nvmf_invalid 00:17:10.791 ************************************ 00:17:10.791 18:05:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:10.791 * Looking for test storage... 00:17:10.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.791 18:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.791 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.792 18:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:17:12.692 18:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.692 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.693 
18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:12.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.693 18:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:12.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.693 
18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:12.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:12.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:12.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:17:12.693 00:17:12.693 --- 10.0.0.2 ping statistics --- 00:17:12.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.693 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:12.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:17:12.693 00:17:12.693 --- 10.0.0.1 ping statistics --- 00:17:12.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.693 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2330889 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2330889 00:17:12.693 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2330889 ']' 00:17:12.694 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.694 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.694 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.694 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.694 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.951 [2024-07-23 18:05:20.357586] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:17:12.951 [2024-07-23 18:05:20.357686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.951 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.951 [2024-07-23 18:05:20.425106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.951 [2024-07-23 18:05:20.511614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.951 [2024-07-23 18:05:20.511665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
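The invalid-input tests that follow capture the `rpc.py nvmf_create_subsystem` output into `$out` and glob-match the JSON-RPC error text (the `*\U\n\a\b\l\e\ \t\o...*` form in the trace is just xtrace's escaped rendering of an ordinary `[[ $out == *...* ]]` pattern). A minimal sketch of that check, with the response body copied from this log:

```shell
# Response text copied verbatim from the nvmf_create_subsystem failure below;
# in the real script $out is captured from rpc.py, not hard-coded.
out='request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode14924",
  "tgt_name": "foobar",
  "method": "nvmf_create_subsystem",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "Unable to find target foobar"
}'

# The negative test passes when the expected error substring is present,
# i.e. the target correctly rejected the bogus tgt_name.
if [[ $out == *"Unable to find target"* ]]; then
  echo "negative test passed: bogus target rejected"
else
  echo "unexpected response" >&2
  exit 1
fi
```

The later serial-number and model-number cases use the same shape, matching on `Invalid SN` and `Invalid MN` respectively.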
00:17:12.951 [2024-07-23 18:05:20.511694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.951 [2024-07-23 18:05:20.511705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.951 [2024-07-23 18:05:20.511715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.951 [2024-07-23 18:05:20.511764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.951 [2024-07-23 18:05:20.511842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.951 [2024-07-23 18:05:20.511905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.951 [2024-07-23 18:05:20.511901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.208 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.208 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:17:13.208 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.208 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.208 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:13.208 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.208 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:13.208 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14924 00:17:13.465 [2024-07-23 18:05:20.879355] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:13.465 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:13.465 { 00:17:13.465 "nqn": "nqn.2016-06.io.spdk:cnode14924", 00:17:13.465 "tgt_name": "foobar", 00:17:13.465 "method": "nvmf_create_subsystem", 00:17:13.465 "req_id": 1 00:17:13.465 } 00:17:13.465 Got JSON-RPC error response 00:17:13.465 response: 00:17:13.465 { 00:17:13.465 "code": -32603, 00:17:13.465 "message": "Unable to find target foobar" 00:17:13.465 }' 00:17:13.465 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:13.465 { 00:17:13.465 "nqn": "nqn.2016-06.io.spdk:cnode14924", 00:17:13.465 "tgt_name": "foobar", 00:17:13.465 "method": "nvmf_create_subsystem", 00:17:13.465 "req_id": 1 00:17:13.465 } 00:17:13.465 Got JSON-RPC error response 00:17:13.465 response: 00:17:13.465 { 00:17:13.465 "code": -32603, 00:17:13.465 "message": "Unable to find target foobar" 00:17:13.465 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:13.465 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:13.465 18:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9615 00:17:13.465 [2024-07-23 18:05:21.124164] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9615: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:13.721 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:13.721 { 00:17:13.721 "nqn": "nqn.2016-06.io.spdk:cnode9615", 00:17:13.721 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:13.721 "method": "nvmf_create_subsystem", 00:17:13.722 "req_id": 1 00:17:13.722 } 00:17:13.722 Got JSON-RPC error response 00:17:13.722 response: 
00:17:13.722 { 00:17:13.722 "code": -32602, 00:17:13.722 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:13.722 }' 00:17:13.722 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:13.722 { 00:17:13.722 "nqn": "nqn.2016-06.io.spdk:cnode9615", 00:17:13.722 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:13.722 "method": "nvmf_create_subsystem", 00:17:13.722 "req_id": 1 00:17:13.722 } 00:17:13.722 Got JSON-RPC error response 00:17:13.722 response: 00:17:13.722 { 00:17:13.722 "code": -32602, 00:17:13.722 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:13.722 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:13.722 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:13.722 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2138 00:17:13.722 [2024-07-23 18:05:21.376999] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2138: invalid model number 'SPDK_Controller' 00:17:13.979 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:13.979 { 00:17:13.979 "nqn": "nqn.2016-06.io.spdk:cnode2138", 00:17:13.979 "model_number": "SPDK_Controller\u001f", 00:17:13.979 "method": "nvmf_create_subsystem", 00:17:13.979 "req_id": 1 00:17:13.979 } 00:17:13.979 Got JSON-RPC error response 00:17:13.979 response: 00:17:13.979 { 00:17:13.979 "code": -32602, 00:17:13.979 "message": "Invalid MN SPDK_Controller\u001f" 00:17:13.979 }' 00:17:13.979 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:13.979 { 00:17:13.979 "nqn": "nqn.2016-06.io.spdk:cnode2138", 00:17:13.979 "model_number": "SPDK_Controller\u001f", 00:17:13.979 "method": "nvmf_create_subsystem", 00:17:13.979 "req_id": 1 00:17:13.979 } 00:17:13.979 
Got JSON-RPC error response 00:17:13.979 response: 00:17:13.979 { 00:17:13.979 "code": -32602, 00:17:13.979 "message": "Invalid MN SPDK_Controller\u001f" 00:17:13.979 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:13.980 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:13.980 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:13.980 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.980 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.981 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:13.981 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:13.981 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:13.981 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.981 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.981 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:17:13.981 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'I&bh,fV-VDIr{>/rXx[^' 00:17:13.981 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'I&bh,fV-VDIr{>/rXx[^' nqn.2016-06.io.spdk:cnode31526 00:17:14.239 [2024-07-23 18:05:21.698081] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31526: invalid serial number 'I&bh,fV-VDIr{>/rXx[^' 00:17:14.239 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:14.239 { 00:17:14.239 "nqn": "nqn.2016-06.io.spdk:cnode31526", 00:17:14.239 "serial_number": "I&bh,f\u007fV-VDIr{>/rXx[^", 00:17:14.239 "method": "nvmf_create_subsystem", 00:17:14.239 "req_id": 1 00:17:14.239 } 00:17:14.239 Got JSON-RPC error response 00:17:14.239 response: 00:17:14.239 { 00:17:14.239 "code": -32602, 00:17:14.239 "message": "Invalid SN I&bh,f\u007fV-VDIr{>/rXx[^" 00:17:14.239 }' 00:17:14.239 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:14.239 { 00:17:14.239 "nqn": "nqn.2016-06.io.spdk:cnode31526", 00:17:14.239 "serial_number": "I&bh,f\u007fV-VDIr{>/rXx[^", 00:17:14.239 "method": "nvmf_create_subsystem", 00:17:14.239 "req_id": 1 00:17:14.239 } 00:17:14.239 Got JSON-RPC 
error response 00:17:14.239 response: 00:17:14.239 { 00:17:14.239 "code": -32602, 00:17:14.239 "message": "Invalid SN I&bh,f\u007fV-VDIr{>/rXx[^" 00:17:14.239 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:14.239 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:14.239 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:14.239 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 
00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:14.240 
18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 
00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:14.240 
18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.240 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:14.241 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:14.241 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:17:14.241 18:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''jO?N1gHtv]mdu.s7)UL@e!7dpp|*v9pbch5$13\l' 00:17:14.241 18:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ''\''jO?N1gHtv]mdu.s7)UL@e!7dpp|*v9pbch5$13\l' nqn.2016-06.io.spdk:cnode8135 00:17:14.499 [2024-07-23 18:05:22.059256] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8135: invalid model number ''jO?N1gHtv]mdu.s7)UL@e!7dpp|*v9pbch5$13\l' 00:17:14.499 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:14.499 { 00:17:14.499 "nqn": "nqn.2016-06.io.spdk:cnode8135", 00:17:14.499 "model_number": "'\''jO?N1gHtv]mdu.s7)UL@e!7dpp|*v9pbch5$13\\l", 00:17:14.499 "method": "nvmf_create_subsystem", 00:17:14.499 "req_id": 1 00:17:14.499 } 00:17:14.499 Got JSON-RPC error response 00:17:14.499 response: 00:17:14.499 { 00:17:14.499 "code": -32602, 00:17:14.499 "message": "Invalid MN '\''jO?N1gHtv]mdu.s7)UL@e!7dpp|*v9pbch5$13\\l" 00:17:14.499 }' 00:17:14.499 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:14.499 { 00:17:14.499 "nqn": "nqn.2016-06.io.spdk:cnode8135", 00:17:14.499 "model_number": "'jO?N1gHtv]mdu.s7)UL@e!7dpp|*v9pbch5$13\\l", 00:17:14.499 "method": "nvmf_create_subsystem", 00:17:14.499 "req_id": 1 00:17:14.499 } 00:17:14.499 Got JSON-RPC error response 00:17:14.499 response: 00:17:14.499 { 00:17:14.499 "code": -32602, 00:17:14.499 "message": "Invalid MN 'jO?N1gHtv]mdu.s7)UL@e!7dpp|*v9pbch5$13\\l" 00:17:14.499 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:14.499 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:14.756 [2024-07-23 18:05:22.296151] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
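The per-character trace above (`printf %x` / `echo -e` / `string+=` repeated once per position) is the `gen_random_s` helper from target/invalid.sh building a random string of printable ASCII characters (codes 32-127) to use as an invalid serial or model number. A hedged, self-contained sketch of that logic — reconstructed from the trace, not the verbatim script — looks like this:

```shell
#!/usr/bin/env bash
# Sketch of the gen_random_s helper seen in the xtrace output above:
# builds a string of $1 random printable ASCII characters, one at a
# time, from the same code-point range (32..127) as the chars array.
gen_random_s() {
    local length=$1 ll string="" code ch
    local chars=($(seq 32 127))          # 96 code points, as in the trace
    for (( ll = 0; ll < length; ll++ )); do
        # pick a random code point, render it via a \xNN escape, append
        code=${chars[RANDOM % ${#chars[@]}]}
        printf -v ch "\\x$(printf '%x' "$code")"
        string+=$ch
    done
    printf '%s\n' "$string"
}

s=$(gen_random_s 41)
printf 'generated %d chars\n' "${#s}"    # 41
```

The test then feeds the resulting string to `rpc.py nvmf_create_subsystem -s`/`-d` and asserts that the JSON-RPC error contains "Invalid SN" or "Invalid MN", as the `[[ ... == *\I\n\v\a\l\i\d\ \S\N* ]]` checks in the trace show.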
00:17:14.756 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:15.014 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:15.014 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:15.014 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:15.014 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:15.014 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:15.271 [2024-07-23 18:05:22.813813] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:15.271 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:15.271 { 00:17:15.271 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:15.271 "listen_address": { 00:17:15.271 "trtype": "tcp", 00:17:15.271 "traddr": "", 00:17:15.271 "trsvcid": "4421" 00:17:15.271 }, 00:17:15.271 "method": "nvmf_subsystem_remove_listener", 00:17:15.271 "req_id": 1 00:17:15.271 } 00:17:15.271 Got JSON-RPC error response 00:17:15.271 response: 00:17:15.271 { 00:17:15.271 "code": -32602, 00:17:15.271 "message": "Invalid parameters" 00:17:15.271 }' 00:17:15.271 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:15.271 { 00:17:15.271 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:15.271 "listen_address": { 00:17:15.271 "trtype": "tcp", 00:17:15.271 "traddr": "", 00:17:15.271 "trsvcid": "4421" 00:17:15.271 }, 00:17:15.271 "method": "nvmf_subsystem_remove_listener", 00:17:15.271 "req_id": 1 00:17:15.271 } 00:17:15.271 Got JSON-RPC error response 
00:17:15.271 response: 00:17:15.271 { 00:17:15.271 "code": -32602, 00:17:15.271 "message": "Invalid parameters" 00:17:15.271 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:15.271 18:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode909 -i 0 00:17:15.529 [2024-07-23 18:05:23.062582] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode909: invalid cntlid range [0-65519] 00:17:15.529 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:15.529 { 00:17:15.529 "nqn": "nqn.2016-06.io.spdk:cnode909", 00:17:15.529 "min_cntlid": 0, 00:17:15.529 "method": "nvmf_create_subsystem", 00:17:15.529 "req_id": 1 00:17:15.529 } 00:17:15.529 Got JSON-RPC error response 00:17:15.529 response: 00:17:15.529 { 00:17:15.529 "code": -32602, 00:17:15.529 "message": "Invalid cntlid range [0-65519]" 00:17:15.529 }' 00:17:15.529 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:15.529 { 00:17:15.529 "nqn": "nqn.2016-06.io.spdk:cnode909", 00:17:15.529 "min_cntlid": 0, 00:17:15.529 "method": "nvmf_create_subsystem", 00:17:15.529 "req_id": 1 00:17:15.529 } 00:17:15.529 Got JSON-RPC error response 00:17:15.529 response: 00:17:15.529 { 00:17:15.529 "code": -32602, 00:17:15.529 "message": "Invalid cntlid range [0-65519]" 00:17:15.529 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:15.529 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20018 -i 65520 00:17:15.786 [2024-07-23 18:05:23.319464] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20018: invalid cntlid range [65520-65519] 00:17:15.786 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@75 -- # out='request: 00:17:15.786 { 00:17:15.786 "nqn": "nqn.2016-06.io.spdk:cnode20018", 00:17:15.786 "min_cntlid": 65520, 00:17:15.786 "method": "nvmf_create_subsystem", 00:17:15.786 "req_id": 1 00:17:15.786 } 00:17:15.786 Got JSON-RPC error response 00:17:15.786 response: 00:17:15.786 { 00:17:15.786 "code": -32602, 00:17:15.786 "message": "Invalid cntlid range [65520-65519]" 00:17:15.786 }' 00:17:15.786 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:15.786 { 00:17:15.786 "nqn": "nqn.2016-06.io.spdk:cnode20018", 00:17:15.786 "min_cntlid": 65520, 00:17:15.786 "method": "nvmf_create_subsystem", 00:17:15.786 "req_id": 1 00:17:15.786 } 00:17:15.786 Got JSON-RPC error response 00:17:15.786 response: 00:17:15.786 { 00:17:15.786 "code": -32602, 00:17:15.786 "message": "Invalid cntlid range [65520-65519]" 00:17:15.786 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:15.786 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15419 -I 0 00:17:16.044 [2024-07-23 18:05:23.564287] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15419: invalid cntlid range [1-0] 00:17:16.044 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:16.044 { 00:17:16.044 "nqn": "nqn.2016-06.io.spdk:cnode15419", 00:17:16.044 "max_cntlid": 0, 00:17:16.044 "method": "nvmf_create_subsystem", 00:17:16.044 "req_id": 1 00:17:16.044 } 00:17:16.044 Got JSON-RPC error response 00:17:16.044 response: 00:17:16.044 { 00:17:16.044 "code": -32602, 00:17:16.044 "message": "Invalid cntlid range [1-0]" 00:17:16.044 }' 00:17:16.044 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:16.044 { 00:17:16.044 "nqn": "nqn.2016-06.io.spdk:cnode15419", 00:17:16.044 "max_cntlid": 0, 
00:17:16.044 "method": "nvmf_create_subsystem", 00:17:16.044 "req_id": 1 00:17:16.044 } 00:17:16.044 Got JSON-RPC error response 00:17:16.044 response: 00:17:16.044 { 00:17:16.044 "code": -32602, 00:17:16.044 "message": "Invalid cntlid range [1-0]" 00:17:16.044 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:16.044 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25540 -I 65520 00:17:16.301 [2024-07-23 18:05:23.813147] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25540: invalid cntlid range [1-65520] 00:17:16.301 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:16.301 { 00:17:16.301 "nqn": "nqn.2016-06.io.spdk:cnode25540", 00:17:16.301 "max_cntlid": 65520, 00:17:16.301 "method": "nvmf_create_subsystem", 00:17:16.301 "req_id": 1 00:17:16.301 } 00:17:16.301 Got JSON-RPC error response 00:17:16.301 response: 00:17:16.301 { 00:17:16.301 "code": -32602, 00:17:16.301 "message": "Invalid cntlid range [1-65520]" 00:17:16.301 }' 00:17:16.301 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:16.301 { 00:17:16.301 "nqn": "nqn.2016-06.io.spdk:cnode25540", 00:17:16.301 "max_cntlid": 65520, 00:17:16.301 "method": "nvmf_create_subsystem", 00:17:16.301 "req_id": 1 00:17:16.301 } 00:17:16.301 Got JSON-RPC error response 00:17:16.301 response: 00:17:16.301 { 00:17:16.301 "code": -32602, 00:17:16.301 "message": "Invalid cntlid range [1-65520]" 00:17:16.301 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:16.301 18:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4581 -i 6 -I 5 00:17:16.559 [2024-07-23 18:05:24.066004] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4581: invalid cntlid range [6-5] 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:16.559 { 00:17:16.559 "nqn": "nqn.2016-06.io.spdk:cnode4581", 00:17:16.559 "min_cntlid": 6, 00:17:16.559 "max_cntlid": 5, 00:17:16.559 "method": "nvmf_create_subsystem", 00:17:16.559 "req_id": 1 00:17:16.559 } 00:17:16.559 Got JSON-RPC error response 00:17:16.559 response: 00:17:16.559 { 00:17:16.559 "code": -32602, 00:17:16.559 "message": "Invalid cntlid range [6-5]" 00:17:16.559 }' 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:16.559 { 00:17:16.559 "nqn": "nqn.2016-06.io.spdk:cnode4581", 00:17:16.559 "min_cntlid": 6, 00:17:16.559 "max_cntlid": 5, 00:17:16.559 "method": "nvmf_create_subsystem", 00:17:16.559 "req_id": 1 00:17:16.559 } 00:17:16.559 Got JSON-RPC error response 00:17:16.559 response: 00:17:16.559 { 00:17:16.559 "code": -32602, 00:17:16.559 "message": "Invalid cntlid range [6-5]" 00:17:16.559 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:16.559 { 00:17:16.559 "name": "foobar", 00:17:16.559 "method": "nvmf_delete_target", 00:17:16.559 "req_id": 1 00:17:16.559 } 00:17:16.559 Got JSON-RPC error response 00:17:16.559 response: 00:17:16.559 { 00:17:16.559 "code": -32602, 00:17:16.559 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:17:16.559 }' 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:16.559 { 00:17:16.559 "name": "foobar", 00:17:16.559 "method": "nvmf_delete_target", 00:17:16.559 "req_id": 1 00:17:16.559 } 00:17:16.559 Got JSON-RPC error response 00:17:16.559 response: 00:17:16.559 { 00:17:16.559 "code": -32602, 00:17:16.559 "message": "The specified target doesn't exist, cannot delete it." 00:17:16.559 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.559 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.559 rmmod nvme_tcp 00:17:16.559 rmmod nvme_fabrics 00:17:16.817 rmmod nvme_keyring 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2330889 ']' 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@490 -- # killprocess 2330889 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2330889 ']' 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2330889 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330889 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330889' 00:17:16.817 killing process with pid 2330889 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2330889 00:17:16.817 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2330889 00:17:17.075 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.075 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.075 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.075 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.075 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.075 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:17:17.075 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.075 18:05:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.979 00:17:18.979 real 0m8.604s 00:17:18.979 user 0m19.565s 00:17:18.979 sys 0m2.460s 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:18.979 ************************************ 00:17:18.979 END TEST nvmf_invalid 00:17:18.979 ************************************ 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:18.979 ************************************ 00:17:18.979 START TEST nvmf_connect_stress 00:17:18.979 ************************************ 00:17:18.979 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:19.240 * Looking for test storage... 
00:17:19.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:17:19.240 18:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:21.146 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:21.147 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.147 18:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.147 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.405 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:21.405 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.406 18:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:21.406 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:21.406 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:21.406 
18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.406 
18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:21.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:17:21.406 00:17:21.406 --- 10.0.0.2 ping statistics --- 00:17:21.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.406 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:17:21.406 00:17:21.406 --- 10.0.0.1 ping statistics --- 00:17:21.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.406 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2333411 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2333411 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2333411 ']' 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.406 18:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.406 [2024-07-23 18:05:29.023980] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:17:21.406 [2024-07-23 18:05:29.024061] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.406 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.665 [2024-07-23 18:05:29.085520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.665 [2024-07-23 18:05:29.165479] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:21.665 [2024-07-23 18:05:29.165538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.665 [2024-07-23 18:05:29.165566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.665 [2024-07-23 18:05:29.165577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.665 [2024-07-23 18:05:29.165587] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.665 [2024-07-23 18:05:29.165672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.665 [2024-07-23 18:05:29.165741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.665 [2024-07-23 18:05:29.165739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.665 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.665 [2024-07-23 18:05:29.309601] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.924 [2024-07-23 18:05:29.343015] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.924 NULL1 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2333548 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.924 18:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.924 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.183 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.183 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:22.183 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.183 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.183 18:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.441 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.441 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:22.441 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.441 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.441 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.007 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.007 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:23.007 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.007 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.007 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.265 
18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.265 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:23.265 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.265 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.265 18:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.523 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.523 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:23.523 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.523 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.523 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.780 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.780 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:23.780 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.780 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.780 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.038 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.038 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 
00:17:24.038 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.038 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.038 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.614 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.614 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:24.614 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.614 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.614 18:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.926 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.926 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:24.926 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.926 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.926 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.184 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.184 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:25.184 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.184 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:25.184 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.442 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.442 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:25.442 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.442 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.442 18:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.700 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.700 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:25.700 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.700 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.700 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.957 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.957 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:25.957 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.957 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.957 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.521 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:17:26.521 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:26.521 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.521 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.521 18:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.779 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.779 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:26.779 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.779 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.779 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.037 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.037 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:27.037 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.037 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.037 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.295 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.295 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:27.295 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.295 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.295 18:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.553 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.553 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:27.553 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.553 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.553 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.119 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.119 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:28.119 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.119 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.119 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.377 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.377 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:28.377 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.377 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.377 18:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:17:28.635 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.635 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:28.635 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.635 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.635 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.893 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.893 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:28.893 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.893 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.893 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.151 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.151 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:29.151 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.151 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.151 18:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.716 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.716 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:29.716 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.716 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.716 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.974 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.974 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:29.974 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.974 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.974 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.232 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.232 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:30.232 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.232 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.232 18:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.489 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.489 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:30.489 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.489 18:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.489 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.747 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.747 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:30.747 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.747 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.747 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.312 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.312 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:31.312 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.312 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.312 18:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.570 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.570 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:31.570 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.570 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.570 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.827 
18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.827 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:31.828 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.828 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.828 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.828 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:32.085 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.085 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2333548 00:17:32.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2333548) - No such process 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2333548 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.086 18:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.086 rmmod nvme_tcp 00:17:32.086 rmmod nvme_fabrics 00:17:32.086 rmmod nvme_keyring 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2333411 ']' 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2333411 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2333411 ']' 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2333411 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.086 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333411 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
2333411' 00:17:32.344 killing process with pid 2333411 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2333411 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2333411 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.344 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.345 18:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:34.883 00:17:34.883 real 0m15.439s 00:17:34.883 user 0m38.555s 00:17:34.883 sys 0m5.853s 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 ************************************ 00:17:34.883 END TEST nvmf_connect_stress 00:17:34.883 ************************************ 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:34.883 
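The connect_stress pass above repeatedly probes the stress process with `kill -0 $PID` (connect_stress.sh line 34) until the probe fails with "No such process", then reaps it with `wait` and cleans up. A minimal sketch of that liveness-polling pattern, using a short-lived `sleep` as a stand-in for the connect_stress binary, is:

```shell
#!/usr/bin/env bash
# Stand-in for the connect_stress binary: a short-lived background job.
sleep 1 &
pid=$!

# Poll with signal 0, as connect_stress.sh does: `kill -0` delivers no
# signal, it only reports whether the process still exists.
while kill -0 "$pid" 2>/dev/null; do
    sleep 0.2
done

# Reap the child, mirroring the `wait $PERF_PID` cleanup step in the log.
wait "$pid"
echo "process $pid has exited"
```

This is a sketch of the polling idiom only; the real script also feeds RPC commands to the target between probes.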
18:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 ************************************ 00:17:34.883 START TEST nvmf_fused_ordering 00:17:34.883 ************************************ 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:34.883 * Looking for test storage... 00:17:34.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.883 
18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.883 18:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:34.883 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:34.884 18:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:34.884 18:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.789 18:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:36.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:36.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.789 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:36.790 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:36.790 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.790 18:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:17:36.790 00:17:36.790 --- 10.0.0.2 ping statistics --- 00:17:36.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.790 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
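The `ip netns` / `ip addr` / `iptables` calls above build a two-interface loopback topology: one port of the NIC is moved into a private namespace as the target side, the other stays in the root namespace as the initiator. A dry-run sketch of that sequence, with interface names and addresses taken from the log (the harness itself runs these inside `nvmf/common.sh`'s `nvmf_tcp_init`; here the commands are only printed, since executing them needs root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology the harness builds above.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0       # moved into the namespace; gets 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the root namespace; gets 10.0.0.1

setup_cmds=(
  "ip netns add $NS"
  "ip link set $TARGET_IF netns $NS"
  "ip addr add 10.0.0.1/24 dev $INITIATOR_IF"
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TARGET_IF"
  "ip link set $INITIATOR_IF up"
  "ip netns exec $NS ip link set $TARGET_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${setup_cmds[@]}"
```

The two `ping -c 1` checks in the log (root namespace → 10.0.0.2, namespace → 10.0.0.1) then confirm the topology carries traffic in both directions before any NVMe/TCP test starts.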
00:17:36.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:17:36.790 00:17:36.790 --- 10.0.0.1 ping statistics --- 00:17:36.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.790 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2336689 00:17:36.790 18:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2336689 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2336689 ']' 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.790 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.790 [2024-07-23 18:05:44.445150] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:17:36.790 [2024-07-23 18:05:44.445252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.047 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.047 [2024-07-23 18:05:44.511435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.047 [2024-07-23 18:05:44.600359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:37.047 [2024-07-23 18:05:44.600419] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.047 [2024-07-23 18:05:44.600450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.047 [2024-07-23 18:05:44.600463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.047 [2024-07-23 18:05:44.600473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.047 [2024-07-23 18:05:44.600501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.305 [2024-07-23 18:05:44.736808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
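`nvmfappstart` above launches `nvmf_tgt` inside the namespace and then `waitforlisten` polls until the app's UNIX-domain RPC socket accepts connections. A minimal sketch of that polling step, assuming the default socket path `/var/tmp/spdk.sock` (the real `waitforlisten` in `common/autotest_common.sh` also checks the PID and uses `rpc.py` rather than a bare socket test):

```shell
#!/usr/bin/env bash
# Poll until an SPDK app's RPC socket appears, with a bounded retry budget.
wait_for_rpc_sock() {
  local sock=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1
}

# Usage against a target started as in the log (needs root):
#   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
#   wait_for_rpc_sock /var/tmp/spdk.sock || echo "target failed to start"
```

Bounding the retries matters in CI: if the target crashes during EAL init, the harness fails fast instead of hanging the whole pipeline.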
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.305 [2024-07-23 18:05:44.752961] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.305 NULL1 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:37.305 18:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.305 18:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:37.305 [2024-07-23 18:05:44.795635] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
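The `rpc_cmd` calls above follow the standard bring-up order: create the TCP transport, create a subsystem, add a listener, create a null bdev, attach it as a namespace. A dry-run sketch of the same sequence as it would look via `scripts/rpc.py`, with all parameter values (`-u 8192`, `NULL1 1000 512`, port 4420) taken from the log; the `rpc.py` path is an assumption and varies per checkout, so the commands are only printed:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC bring-up sequence fused_ordering.sh issues above.
RPC="scripts/rpc.py"              # assumed path; adjust to your SPDK tree
NQN=nqn.2016-06.io.spdk:cnode1

rpc_cmds=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"
  "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10"
  "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
  "$RPC bdev_null_create NULL1 1000 512"
  "$RPC nvmf_subsystem_add_ns $NQN NULL1"
)
printf '%s\n' "${rpc_cmds[@]}"
```

Ordering matters here: the namespace can only be attached after both the subsystem and the backing bdev exist, which is why the script waits for `bdev_wait_for_examine` before `nvmf_subsystem_add_ns`.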
00:17:37.305 [2024-07-23 18:05:44.795675] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336719 ] 00:17:37.305 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.562 Attached to nqn.2016-06.io.spdk:cnode1 00:17:37.562 Namespace ID: 1 size: 1GB 00:17:37.562 fused_ordering(0) 00:17:37.562 fused_ordering(1) 00:17:37.562 fused_ordering(2) 00:17:37.562 fused_ordering(3) 00:17:37.562 fused_ordering(4) 00:17:37.562 fused_ordering(5) 00:17:37.562 fused_ordering(6) 00:17:37.562 fused_ordering(7) 00:17:37.562 fused_ordering(8) 00:17:37.562 fused_ordering(9) 00:17:37.562 fused_ordering(10) 00:17:37.562 fused_ordering(11) 00:17:37.562 fused_ordering(12) 00:17:37.562 fused_ordering(13) 00:17:37.562 fused_ordering(14) 00:17:37.562 fused_ordering(15) 00:17:37.562 fused_ordering(16) 00:17:37.562 fused_ordering(17) 00:17:37.562 fused_ordering(18) 00:17:37.562 fused_ordering(19) 00:17:37.562 fused_ordering(20) 00:17:37.562 fused_ordering(21) 00:17:37.562 fused_ordering(22) 00:17:37.562 fused_ordering(23) 00:17:37.562 fused_ordering(24) 00:17:37.562 fused_ordering(25) 00:17:37.562 fused_ordering(26) 00:17:37.562 fused_ordering(27) 00:17:37.562 fused_ordering(28) 00:17:37.562 fused_ordering(29) 00:17:37.562 fused_ordering(30) 00:17:37.562 fused_ordering(31) 00:17:37.562 fused_ordering(32) 00:17:37.562 fused_ordering(33) 00:17:37.562 fused_ordering(34) 00:17:37.562 fused_ordering(35) 00:17:37.562 fused_ordering(36) 00:17:37.562 fused_ordering(37) 00:17:37.562 fused_ordering(38) 00:17:37.562 fused_ordering(39) 00:17:37.562 fused_ordering(40) 00:17:37.562 fused_ordering(41) 00:17:37.562 fused_ordering(42) 00:17:37.562 fused_ordering(43) 00:17:37.562 fused_ordering(44) 00:17:37.562 fused_ordering(45) 00:17:37.562 fused_ordering(46) 00:17:37.562 fused_ordering(47) 00:17:37.562 
fused_ordering(48) 00:17:37.562 fused_ordering(49) 00:17:37.562 fused_ordering(50) 00:17:37.562 fused_ordering(51) 00:17:37.562 fused_ordering(52) 00:17:37.562 fused_ordering(53) 00:17:37.562 fused_ordering(54) 00:17:37.562 fused_ordering(55) 00:17:37.562 fused_ordering(56) 00:17:37.562 fused_ordering(57) 00:17:37.562 fused_ordering(58) 00:17:37.562 fused_ordering(59) 00:17:37.562 fused_ordering(60) 00:17:37.562 fused_ordering(61) 00:17:37.562 fused_ordering(62) 00:17:37.562 fused_ordering(63) 00:17:37.562 fused_ordering(64) 00:17:37.562 fused_ordering(65) 00:17:37.562 fused_ordering(66) 00:17:37.562 fused_ordering(67) 00:17:37.562 fused_ordering(68) 00:17:37.562 fused_ordering(69) 00:17:37.562 fused_ordering(70) 00:17:37.562 fused_ordering(71) 00:17:37.562 fused_ordering(72) 00:17:37.562 fused_ordering(73) 00:17:37.562 fused_ordering(74) 00:17:37.562 fused_ordering(75) 00:17:37.562 fused_ordering(76) 00:17:37.562 fused_ordering(77) 00:17:37.562 fused_ordering(78) 00:17:37.562 fused_ordering(79) 00:17:37.562 fused_ordering(80) 00:17:37.562 fused_ordering(81) 00:17:37.562 fused_ordering(82) 00:17:37.562 fused_ordering(83) 00:17:37.562 fused_ordering(84) 00:17:37.562 fused_ordering(85) 00:17:37.562 fused_ordering(86) 00:17:37.562 fused_ordering(87) 00:17:37.562 fused_ordering(88) 00:17:37.562 fused_ordering(89) 00:17:37.562 fused_ordering(90) 00:17:37.562 fused_ordering(91) 00:17:37.562 fused_ordering(92) 00:17:37.563 fused_ordering(93) 00:17:37.563 fused_ordering(94) 00:17:37.563 fused_ordering(95) 00:17:37.563 fused_ordering(96) 00:17:37.563 fused_ordering(97) 00:17:37.563 fused_ordering(98) 00:17:37.563 fused_ordering(99) 00:17:37.563 fused_ordering(100) 00:17:37.563 fused_ordering(101) 00:17:37.563 fused_ordering(102) 00:17:37.563 fused_ordering(103) 00:17:37.563 fused_ordering(104) 00:17:37.563 fused_ordering(105) 00:17:37.563 fused_ordering(106) 00:17:37.563 fused_ordering(107) 00:17:37.563 fused_ordering(108) 00:17:37.563 fused_ordering(109) 00:17:37.563 
fused_ordering(110) 00:17:37.563 fused_ordering(111) 00:17:37.563 fused_ordering(112) 00:17:37.563 fused_ordering(113) 00:17:37.563 fused_ordering(114) 00:17:37.563 fused_ordering(115) 00:17:37.563 fused_ordering(116) 00:17:37.563 fused_ordering(117) 00:17:37.563 fused_ordering(118) 00:17:37.563 fused_ordering(119) 00:17:37.563 fused_ordering(120) 00:17:37.563 fused_ordering(121) 00:17:37.563 fused_ordering(122) 00:17:37.563 fused_ordering(123) 00:17:37.563 fused_ordering(124) 00:17:37.563 fused_ordering(125) 00:17:37.563 fused_ordering(126) 00:17:37.563 fused_ordering(127) 00:17:37.563 fused_ordering(128) 00:17:37.563 fused_ordering(129) 00:17:37.563 fused_ordering(130) 00:17:37.563 fused_ordering(131) 00:17:37.563 fused_ordering(132) 00:17:37.563 fused_ordering(133) 00:17:37.563 fused_ordering(134) 00:17:37.563 fused_ordering(135) 00:17:37.563 fused_ordering(136) 00:17:37.563 fused_ordering(137) 00:17:37.563 fused_ordering(138) 00:17:37.563 fused_ordering(139) 00:17:37.563 fused_ordering(140) 00:17:37.563 fused_ordering(141) 00:17:37.563 fused_ordering(142) 00:17:37.563 fused_ordering(143) 00:17:37.563 fused_ordering(144) 00:17:37.563 fused_ordering(145) 00:17:37.563 fused_ordering(146) 00:17:37.563 fused_ordering(147) 00:17:37.563 fused_ordering(148) 00:17:37.563 fused_ordering(149) 00:17:37.563 fused_ordering(150) 00:17:37.563 fused_ordering(151) 00:17:37.563 fused_ordering(152) 00:17:37.563 fused_ordering(153) 00:17:37.563 fused_ordering(154) 00:17:37.563 fused_ordering(155) 00:17:37.563 fused_ordering(156) 00:17:37.563 fused_ordering(157) 00:17:37.563 fused_ordering(158) 00:17:37.563 fused_ordering(159) 00:17:37.563 fused_ordering(160) 00:17:37.563 fused_ordering(161) 00:17:37.563 fused_ordering(162) 00:17:37.563 fused_ordering(163) 00:17:37.563 fused_ordering(164) 00:17:37.563 fused_ordering(165) 00:17:37.563 fused_ordering(166) 00:17:37.563 fused_ordering(167) 00:17:37.563 fused_ordering(168) 00:17:37.563 fused_ordering(169) 00:17:37.563 fused_ordering(170) 
00:17:37.563 fused_ordering(171) 00:17:37.563 fused_ordering(172) 00:17:37.563 fused_ordering(173) 00:17:37.563 fused_ordering(174) 00:17:37.563 fused_ordering(175) 00:17:37.563 fused_ordering(176) 00:17:37.563 fused_ordering(177) 00:17:37.563 fused_ordering(178) 00:17:37.563 fused_ordering(179) 00:17:37.563 fused_ordering(180) 00:17:37.563 fused_ordering(181) 00:17:37.563 fused_ordering(182) 00:17:37.563 fused_ordering(183) 00:17:37.563 fused_ordering(184) 00:17:37.563 fused_ordering(185) 00:17:37.563 fused_ordering(186) 00:17:37.563 fused_ordering(187) 00:17:37.563 fused_ordering(188) 00:17:37.563 fused_ordering(189) 00:17:37.563 fused_ordering(190) 00:17:37.563 fused_ordering(191) 00:17:37.563 fused_ordering(192) 00:17:37.563 fused_ordering(193) 00:17:37.563 fused_ordering(194) 00:17:37.563 fused_ordering(195) 00:17:37.563 fused_ordering(196) 00:17:37.563 fused_ordering(197) 00:17:37.563 fused_ordering(198) 00:17:37.563 fused_ordering(199) 00:17:37.563 fused_ordering(200) 00:17:37.563 fused_ordering(201) 00:17:37.563 fused_ordering(202) 00:17:37.563 fused_ordering(203) 00:17:37.563 fused_ordering(204) 00:17:37.563 fused_ordering(205) 00:17:38.128 fused_ordering(206) 00:17:38.128 fused_ordering(207) 00:17:38.128 fused_ordering(208) 00:17:38.128 fused_ordering(209) 00:17:38.128 fused_ordering(210) 00:17:38.128 fused_ordering(211) 00:17:38.128 fused_ordering(212) 00:17:38.128 fused_ordering(213) 00:17:38.128 fused_ordering(214) 00:17:38.128 fused_ordering(215) 00:17:38.128 fused_ordering(216) 00:17:38.128 fused_ordering(217) 00:17:38.128 fused_ordering(218) 00:17:38.128 fused_ordering(219) 00:17:38.128 fused_ordering(220) 00:17:38.128 fused_ordering(221) 00:17:38.128 fused_ordering(222) 00:17:38.128 fused_ordering(223) 00:17:38.128 fused_ordering(224) 00:17:38.128 fused_ordering(225) 00:17:38.128 fused_ordering(226) 00:17:38.128 fused_ordering(227) 00:17:38.128 fused_ordering(228) 00:17:38.128 fused_ordering(229) 00:17:38.128 fused_ordering(230) 00:17:38.128 
fused_ordering(231) 00:17:38.128 fused_ordering(232) 00:17:38.128 fused_ordering(233) 00:17:38.128 fused_ordering(234) 00:17:38.128 fused_ordering(235) 00:17:38.128 fused_ordering(236) 00:17:38.128 fused_ordering(237) 00:17:38.128 fused_ordering(238) 00:17:38.128 fused_ordering(239) 00:17:38.128 fused_ordering(240) 00:17:38.128 fused_ordering(241) 00:17:38.128 fused_ordering(242) 00:17:38.128 fused_ordering(243) 00:17:38.128 fused_ordering(244) 00:17:38.128 fused_ordering(245) 00:17:38.128 fused_ordering(246) 00:17:38.128 fused_ordering(247) 00:17:38.128 fused_ordering(248) 00:17:38.128 fused_ordering(249) 00:17:38.128 fused_ordering(250) 00:17:38.128 fused_ordering(251) 00:17:38.128 fused_ordering(252) 00:17:38.128 fused_ordering(253) 00:17:38.128 fused_ordering(254) 00:17:38.128 fused_ordering(255) 00:17:38.128 fused_ordering(256) 00:17:38.128 fused_ordering(257) 00:17:38.128 fused_ordering(258) 00:17:38.128 fused_ordering(259) 00:17:38.128 fused_ordering(260) 00:17:38.128 fused_ordering(261) 00:17:38.128 fused_ordering(262) 00:17:38.128 fused_ordering(263) 00:17:38.128 fused_ordering(264) 00:17:38.128 fused_ordering(265) 00:17:38.128 fused_ordering(266) 00:17:38.128 fused_ordering(267) 00:17:38.128 fused_ordering(268) 00:17:38.128 fused_ordering(269) 00:17:38.128 fused_ordering(270) 00:17:38.128 fused_ordering(271) 00:17:38.128 fused_ordering(272) 00:17:38.128 fused_ordering(273) 00:17:38.128 fused_ordering(274) 00:17:38.128 fused_ordering(275) 00:17:38.128 fused_ordering(276) 00:17:38.128 fused_ordering(277) 00:17:38.128 fused_ordering(278) 00:17:38.128 fused_ordering(279) 00:17:38.128 fused_ordering(280) 00:17:38.128 fused_ordering(281) 00:17:38.128 fused_ordering(282) 00:17:38.128 fused_ordering(283) 00:17:38.128 fused_ordering(284) 00:17:38.128 fused_ordering(285) 00:17:38.128 fused_ordering(286) 00:17:38.128 fused_ordering(287) 00:17:38.128 fused_ordering(288) 00:17:38.128 fused_ordering(289) 00:17:38.128 fused_ordering(290) 00:17:38.128 fused_ordering(291) 
00:17:38.128 fused_ordering(292) 00:17:38.128 fused_ordering(293) 00:17:38.128 fused_ordering(294) 00:17:38.128 fused_ordering(295) 00:17:38.128 fused_ordering(296) 00:17:38.128 fused_ordering(297) 00:17:38.128 fused_ordering(298) 00:17:38.128 fused_ordering(299) 00:17:38.128 fused_ordering(300) 00:17:38.128 fused_ordering(301) 00:17:38.128 fused_ordering(302) 00:17:38.128 fused_ordering(303) 00:17:38.128 fused_ordering(304) 00:17:38.128 fused_ordering(305) 00:17:38.128 fused_ordering(306) 00:17:38.128 fused_ordering(307) 00:17:38.128 fused_ordering(308) 00:17:38.128 fused_ordering(309) 00:17:38.128 fused_ordering(310) 00:17:38.128 fused_ordering(311) 00:17:38.128 fused_ordering(312) 00:17:38.128 fused_ordering(313) 00:17:38.128 fused_ordering(314) 00:17:38.128 fused_ordering(315) 00:17:38.128 fused_ordering(316) 00:17:38.128 fused_ordering(317) 00:17:38.128 fused_ordering(318) 00:17:38.128 fused_ordering(319) 00:17:38.128 fused_ordering(320) 00:17:38.128 fused_ordering(321) 00:17:38.128 fused_ordering(322) 00:17:38.128 fused_ordering(323) 00:17:38.128 fused_ordering(324) 00:17:38.128 fused_ordering(325) 00:17:38.128 fused_ordering(326) 00:17:38.128 fused_ordering(327) 00:17:38.128 fused_ordering(328) 00:17:38.128 fused_ordering(329) 00:17:38.128 fused_ordering(330) 00:17:38.128 fused_ordering(331) 00:17:38.128 fused_ordering(332) 00:17:38.128 fused_ordering(333) 00:17:38.128 fused_ordering(334) 00:17:38.128 fused_ordering(335) 00:17:38.128 fused_ordering(336) 00:17:38.128 fused_ordering(337) 00:17:38.128 fused_ordering(338) 00:17:38.128 fused_ordering(339) 00:17:38.128 fused_ordering(340) 00:17:38.128 fused_ordering(341) 00:17:38.128 fused_ordering(342) 00:17:38.128 fused_ordering(343) 00:17:38.128 fused_ordering(344) 00:17:38.128 fused_ordering(345) 00:17:38.128 fused_ordering(346) 00:17:38.128 fused_ordering(347) 00:17:38.128 fused_ordering(348) 00:17:38.128 fused_ordering(349) 00:17:38.128 fused_ordering(350) 00:17:38.128 fused_ordering(351) 00:17:38.128 
fused_ordering(352) 00:17:38.128 fused_ordering(353) 00:17:38.128 fused_ordering(354) 00:17:38.128 fused_ordering(355) 00:17:38.128 fused_ordering(356) 00:17:38.128 fused_ordering(357) 00:17:38.128 fused_ordering(358) 00:17:38.128 fused_ordering(359) 00:17:38.128 fused_ordering(360) 00:17:38.128 fused_ordering(361) 00:17:38.128 fused_ordering(362) 00:17:38.128 fused_ordering(363) 00:17:38.128 fused_ordering(364) 00:17:38.128 fused_ordering(365) 00:17:38.128 fused_ordering(366) 00:17:38.128 fused_ordering(367) 00:17:38.128 fused_ordering(368) 00:17:38.128 fused_ordering(369) 00:17:38.128 fused_ordering(370) 00:17:38.128 fused_ordering(371) 00:17:38.128 fused_ordering(372) 00:17:38.128 fused_ordering(373) 00:17:38.128 fused_ordering(374) 00:17:38.128 fused_ordering(375) 00:17:38.128 fused_ordering(376) 00:17:38.128 fused_ordering(377) 00:17:38.128 fused_ordering(378) 00:17:38.128 fused_ordering(379) 00:17:38.128 fused_ordering(380) 00:17:38.128 fused_ordering(381) 00:17:38.128 fused_ordering(382) 00:17:38.128 fused_ordering(383) 00:17:38.128 fused_ordering(384) 00:17:38.128 fused_ordering(385) 00:17:38.128 fused_ordering(386) 00:17:38.128 fused_ordering(387) 00:17:38.128 fused_ordering(388) 00:17:38.128 fused_ordering(389) 00:17:38.128 fused_ordering(390) 00:17:38.128 fused_ordering(391) 00:17:38.128 fused_ordering(392) 00:17:38.128 fused_ordering(393) 00:17:38.128 fused_ordering(394) 00:17:38.128 fused_ordering(395) 00:17:38.128 fused_ordering(396) 00:17:38.128 fused_ordering(397) 00:17:38.128 fused_ordering(398) 00:17:38.128 fused_ordering(399) 00:17:38.128 fused_ordering(400) 00:17:38.128 fused_ordering(401) 00:17:38.128 fused_ordering(402) 00:17:38.128 fused_ordering(403) 00:17:38.128 fused_ordering(404) 00:17:38.128 fused_ordering(405) 00:17:38.128 fused_ordering(406) 00:17:38.128 fused_ordering(407) 00:17:38.128 fused_ordering(408) 00:17:38.128 fused_ordering(409) 00:17:38.128 fused_ordering(410) 00:17:38.386 fused_ordering(411) 00:17:38.386 fused_ordering(412) 
00:17:38.386 fused_ordering(413) 00:17:38.386 fused_ordering(414) 00:17:38.386 fused_ordering(415) 00:17:38.386 fused_ordering(416) 00:17:38.386 fused_ordering(417) 00:17:38.386 fused_ordering(418) 00:17:38.386 fused_ordering(419) 00:17:38.386 fused_ordering(420) 00:17:38.386 fused_ordering(421) 00:17:38.386 fused_ordering(422) 00:17:38.386 fused_ordering(423) 00:17:38.386 fused_ordering(424) 00:17:38.386 fused_ordering(425) 00:17:38.386 fused_ordering(426) 00:17:38.386 fused_ordering(427) 00:17:38.386 fused_ordering(428) 00:17:38.386 fused_ordering(429) 00:17:38.386 fused_ordering(430) 00:17:38.386 fused_ordering(431) 00:17:38.386 fused_ordering(432) 00:17:38.386 fused_ordering(433) 00:17:38.386 fused_ordering(434) 00:17:38.386 fused_ordering(435) 00:17:38.386 fused_ordering(436) 00:17:38.386 fused_ordering(437) 00:17:38.386 fused_ordering(438) 00:17:38.386 fused_ordering(439) 00:17:38.386 fused_ordering(440) 00:17:38.386 fused_ordering(441) 00:17:38.386 fused_ordering(442) 00:17:38.386 fused_ordering(443) 00:17:38.386 fused_ordering(444) 00:17:38.386 fused_ordering(445) 00:17:38.386 fused_ordering(446) 00:17:38.386 fused_ordering(447) 00:17:38.386 fused_ordering(448) 00:17:38.386 fused_ordering(449) 00:17:38.386 fused_ordering(450) 00:17:38.386 fused_ordering(451) 00:17:38.386 fused_ordering(452) 00:17:38.386 fused_ordering(453) 00:17:38.386 fused_ordering(454) 00:17:38.386 fused_ordering(455) 00:17:38.386 fused_ordering(456) 00:17:38.386 fused_ordering(457) 00:17:38.386 fused_ordering(458) 00:17:38.386 fused_ordering(459) 00:17:38.386 fused_ordering(460) 00:17:38.386 fused_ordering(461) 00:17:38.386 fused_ordering(462) 00:17:38.386 fused_ordering(463) 00:17:38.386 fused_ordering(464) 00:17:38.386 fused_ordering(465) 00:17:38.386 fused_ordering(466) 00:17:38.386 fused_ordering(467) 00:17:38.386 fused_ordering(468) 00:17:38.386 fused_ordering(469) 00:17:38.386 fused_ordering(470) 00:17:38.386 fused_ordering(471) 00:17:38.386 fused_ordering(472) 00:17:38.386 
fused_ordering(473) 00:17:38.386 fused_ordering(474) 00:17:38.386 fused_ordering(475) 00:17:38.386 fused_ordering(476) 00:17:38.386 fused_ordering(477) 00:17:38.386 fused_ordering(478) 00:17:38.386 fused_ordering(479) 00:17:38.386 fused_ordering(480) 00:17:38.386 fused_ordering(481) 00:17:38.386 fused_ordering(482) 00:17:38.386 fused_ordering(483) 00:17:38.386 fused_ordering(484) 00:17:38.386 fused_ordering(485) 00:17:38.386 fused_ordering(486) 00:17:38.386 fused_ordering(487) 00:17:38.386 fused_ordering(488) 00:17:38.386 fused_ordering(489) 00:17:38.386 fused_ordering(490) 00:17:38.386 fused_ordering(491) 00:17:38.386 fused_ordering(492) 00:17:38.387 fused_ordering(493) 00:17:38.387 fused_ordering(494) 00:17:38.387 fused_ordering(495) 00:17:38.387 fused_ordering(496) 00:17:38.387 fused_ordering(497) 00:17:38.387 fused_ordering(498) 00:17:38.387 fused_ordering(499) 00:17:38.387 fused_ordering(500) 00:17:38.387 fused_ordering(501) 00:17:38.387 fused_ordering(502) 00:17:38.387 fused_ordering(503) 00:17:38.387 fused_ordering(504) 00:17:38.387 fused_ordering(505) 00:17:38.387 fused_ordering(506) 00:17:38.387 fused_ordering(507) 00:17:38.387 fused_ordering(508) 00:17:38.387 fused_ordering(509) 00:17:38.387 fused_ordering(510) 00:17:38.387 fused_ordering(511) 00:17:38.387 fused_ordering(512) 00:17:38.387 fused_ordering(513) 00:17:38.387 fused_ordering(514) 00:17:38.387 fused_ordering(515) 00:17:38.387 fused_ordering(516) 00:17:38.387 fused_ordering(517) 00:17:38.387 fused_ordering(518) 00:17:38.387 fused_ordering(519) 00:17:38.387 fused_ordering(520) 00:17:38.387 fused_ordering(521) 00:17:38.387 fused_ordering(522) 00:17:38.387 fused_ordering(523) 00:17:38.387 fused_ordering(524) 00:17:38.387 fused_ordering(525) 00:17:38.387 fused_ordering(526) 00:17:38.387 fused_ordering(527) 00:17:38.387 fused_ordering(528) 00:17:38.387 fused_ordering(529) 00:17:38.387 fused_ordering(530) 00:17:38.387 fused_ordering(531) 00:17:38.387 fused_ordering(532) 00:17:38.387 fused_ordering(533) 
00:17:38.387 fused_ordering(534) 00:17:38.387 fused_ordering(535) 00:17:38.387 fused_ordering(536) 00:17:38.387 fused_ordering(537) 00:17:38.387 fused_ordering(538) 00:17:38.387 fused_ordering(539) 00:17:38.387 fused_ordering(540) 00:17:38.387 fused_ordering(541) 00:17:38.387 fused_ordering(542) 00:17:38.387 fused_ordering(543) 00:17:38.387 fused_ordering(544) 00:17:38.387 fused_ordering(545) 00:17:38.387 fused_ordering(546) 00:17:38.387 fused_ordering(547) 00:17:38.387 fused_ordering(548) 00:17:38.387 fused_ordering(549) 00:17:38.387 fused_ordering(550) 00:17:38.387 fused_ordering(551) 00:17:38.387 fused_ordering(552) 00:17:38.387 fused_ordering(553) 00:17:38.387 fused_ordering(554) 00:17:38.387 fused_ordering(555) 00:17:38.387 fused_ordering(556) 00:17:38.387 fused_ordering(557) 00:17:38.387 fused_ordering(558) 00:17:38.387 fused_ordering(559) 00:17:38.387 fused_ordering(560) 00:17:38.387 fused_ordering(561) 00:17:38.387 fused_ordering(562) 00:17:38.387 fused_ordering(563) 00:17:38.387 fused_ordering(564) 00:17:38.387 fused_ordering(565) 00:17:38.387 fused_ordering(566) 00:17:38.387 fused_ordering(567) 00:17:38.387 fused_ordering(568) 00:17:38.387 fused_ordering(569) 00:17:38.387 fused_ordering(570) 00:17:38.387 fused_ordering(571) 00:17:38.387 fused_ordering(572) 00:17:38.387 fused_ordering(573) 00:17:38.387 fused_ordering(574) 00:17:38.387 fused_ordering(575) 00:17:38.387 fused_ordering(576) 00:17:38.387 fused_ordering(577) 00:17:38.387 fused_ordering(578) 00:17:38.387 fused_ordering(579) 00:17:38.387 fused_ordering(580) 00:17:38.387 fused_ordering(581) 00:17:38.387 fused_ordering(582) 00:17:38.387 fused_ordering(583) 00:17:38.387 fused_ordering(584) 00:17:38.387 fused_ordering(585) 00:17:38.387 fused_ordering(586) 00:17:38.387 fused_ordering(587) 00:17:38.387 fused_ordering(588) 00:17:38.387 fused_ordering(589) 00:17:38.387 fused_ordering(590) 00:17:38.387 fused_ordering(591) 00:17:38.387 fused_ordering(592) 00:17:38.387 fused_ordering(593) 00:17:38.387 
fused_ordering(594) 00:17:38.387 fused_ordering(595) 00:17:38.387 fused_ordering(596) 00:17:38.387 fused_ordering(597) 00:17:38.387 fused_ordering(598) 00:17:38.387 fused_ordering(599) 00:17:38.387 fused_ordering(600) 00:17:38.387 fused_ordering(601) 00:17:38.387 fused_ordering(602) 00:17:38.387 fused_ordering(603) 00:17:38.387 fused_ordering(604) 00:17:38.387 fused_ordering(605) 00:17:38.387 fused_ordering(606) 00:17:38.387 fused_ordering(607) 00:17:38.387 fused_ordering(608) 00:17:38.387 fused_ordering(609) 00:17:38.387 fused_ordering(610) 00:17:38.387 fused_ordering(611) 00:17:38.387 fused_ordering(612) 00:17:38.387 fused_ordering(613) 00:17:38.387 fused_ordering(614) 00:17:38.387 fused_ordering(615) 00:17:38.952 fused_ordering(616) 00:17:38.952 fused_ordering(617) 00:17:38.952 fused_ordering(618) 00:17:38.952 fused_ordering(619) 00:17:38.952 fused_ordering(620) 00:17:38.952 fused_ordering(621) 00:17:38.952 fused_ordering(622) 00:17:38.952 fused_ordering(623) 00:17:38.952 fused_ordering(624) 00:17:38.952 fused_ordering(625) 00:17:38.952 fused_ordering(626) 00:17:38.952 fused_ordering(627) 00:17:38.952 fused_ordering(628) 00:17:38.952 fused_ordering(629) 00:17:38.952 fused_ordering(630) 00:17:38.952 fused_ordering(631) 00:17:38.952 fused_ordering(632) 00:17:38.952 fused_ordering(633) 00:17:38.952 fused_ordering(634) 00:17:38.952 fused_ordering(635) 00:17:38.952 fused_ordering(636) 00:17:38.952 fused_ordering(637) 00:17:38.952 fused_ordering(638) 00:17:38.952 fused_ordering(639) 00:17:38.952 fused_ordering(640) 00:17:38.952 fused_ordering(641) 00:17:38.952 fused_ordering(642) 00:17:38.952 fused_ordering(643) 00:17:38.952 fused_ordering(644) 00:17:38.952 fused_ordering(645) 00:17:38.952 fused_ordering(646) 00:17:38.952 fused_ordering(647) 00:17:38.952 fused_ordering(648) 00:17:38.952 fused_ordering(649) 00:17:38.952 fused_ordering(650) 00:17:38.952 fused_ordering(651) 00:17:38.952 fused_ordering(652) 00:17:38.952 fused_ordering(653) 00:17:38.952 fused_ordering(654) 
00:17:38.952 fused_ordering(655) 00:17:38.952 fused_ordering(656) 00:17:38.952 fused_ordering(657) 00:17:38.952 fused_ordering(658) 00:17:38.952 fused_ordering(659) 00:17:38.952 fused_ordering(660) 00:17:38.952 fused_ordering(661) 00:17:38.952 fused_ordering(662) 00:17:38.952 fused_ordering(663) 00:17:38.952 fused_ordering(664) 00:17:38.952 fused_ordering(665) 00:17:38.952 fused_ordering(666) 00:17:38.952 fused_ordering(667) 00:17:38.952 fused_ordering(668) 00:17:38.952 fused_ordering(669) 00:17:38.952 fused_ordering(670) 00:17:38.952 fused_ordering(671) 00:17:38.952 fused_ordering(672) 00:17:38.952 fused_ordering(673) 00:17:38.952 fused_ordering(674) 00:17:38.952 fused_ordering(675) 00:17:38.952 fused_ordering(676) 00:17:38.952 fused_ordering(677) 00:17:38.952 fused_ordering(678) 00:17:38.952 fused_ordering(679) 00:17:38.952 fused_ordering(680) 00:17:38.952 fused_ordering(681) 00:17:38.952 fused_ordering(682) 00:17:38.952 fused_ordering(683) 00:17:38.952 fused_ordering(684) 00:17:38.952 fused_ordering(685) 00:17:38.952 fused_ordering(686) 00:17:38.952 fused_ordering(687) 00:17:38.952 fused_ordering(688) 00:17:38.952 fused_ordering(689) 00:17:38.952 fused_ordering(690) 00:17:38.952 fused_ordering(691) 00:17:38.952 fused_ordering(692) 00:17:38.952 fused_ordering(693) 00:17:38.952 fused_ordering(694) 00:17:38.952 fused_ordering(695) 00:17:38.952 fused_ordering(696) 00:17:38.952 fused_ordering(697) 00:17:38.952 fused_ordering(698) 00:17:38.952 fused_ordering(699) 00:17:38.952 fused_ordering(700) 00:17:38.952 fused_ordering(701) 00:17:38.952 fused_ordering(702) 00:17:38.952 fused_ordering(703) 00:17:38.952 fused_ordering(704) 00:17:38.952 fused_ordering(705) 00:17:38.952 fused_ordering(706) 00:17:38.952 fused_ordering(707) 00:17:38.952 fused_ordering(708) 00:17:38.952 fused_ordering(709) 00:17:38.952 fused_ordering(710) 00:17:38.952 fused_ordering(711) 00:17:38.952 fused_ordering(712) 00:17:38.952 fused_ordering(713) 00:17:38.952 fused_ordering(714) 00:17:38.952 
fused_ordering(715) 00:17:38.952 fused_ordering(716) 00:17:38.952 fused_ordering(717) 00:17:38.952 fused_ordering(718) 00:17:38.952 fused_ordering(719) 00:17:38.952 fused_ordering(720) 00:17:38.952 fused_ordering(721) 00:17:38.952 fused_ordering(722) 00:17:38.952 fused_ordering(723) 00:17:38.952 fused_ordering(724) 00:17:38.952 fused_ordering(725) 00:17:38.952 fused_ordering(726) 00:17:38.952 fused_ordering(727) 00:17:38.952 fused_ordering(728) 00:17:38.952 fused_ordering(729) 00:17:38.952 fused_ordering(730) 00:17:38.952 fused_ordering(731) 00:17:38.952 fused_ordering(732) 00:17:38.952 fused_ordering(733) 00:17:38.952 fused_ordering(734) 00:17:38.952 fused_ordering(735) 00:17:38.952 fused_ordering(736) 00:17:38.952 fused_ordering(737) 00:17:38.952 fused_ordering(738) 00:17:38.952 fused_ordering(739) 00:17:38.952 fused_ordering(740) 00:17:38.952 fused_ordering(741) 00:17:38.952 fused_ordering(742) 00:17:38.952 fused_ordering(743) 00:17:38.952 fused_ordering(744) 00:17:38.952 fused_ordering(745) 00:17:38.952 fused_ordering(746) 00:17:38.952 fused_ordering(747) 00:17:38.952 fused_ordering(748) 00:17:38.952 fused_ordering(749) 00:17:38.952 fused_ordering(750) 00:17:38.952 fused_ordering(751) 00:17:38.952 fused_ordering(752) 00:17:38.952 fused_ordering(753) 00:17:38.952 fused_ordering(754) 00:17:38.952 fused_ordering(755) 00:17:38.952 fused_ordering(756) 00:17:38.952 fused_ordering(757) 00:17:38.952 fused_ordering(758) 00:17:38.952 fused_ordering(759) 00:17:38.952 fused_ordering(760) 00:17:38.952 fused_ordering(761) 00:17:38.952 fused_ordering(762) 00:17:38.952 fused_ordering(763) 00:17:38.952 fused_ordering(764) 00:17:38.952 fused_ordering(765) 00:17:38.953 fused_ordering(766) 00:17:38.953 fused_ordering(767) 00:17:38.953 fused_ordering(768) 00:17:38.953 fused_ordering(769) 00:17:38.953 fused_ordering(770) 00:17:38.953 fused_ordering(771) 00:17:38.953 fused_ordering(772) 00:17:38.953 fused_ordering(773) 00:17:38.953 fused_ordering(774) 00:17:38.953 fused_ordering(775) 
00:17:38.953 fused_ordering(776) 00:17:38.953 fused_ordering(777) 00:17:38.953 fused_ordering(778) 00:17:38.953 fused_ordering(779) 00:17:38.953 fused_ordering(780) 00:17:38.953 fused_ordering(781) 00:17:38.953 fused_ordering(782) 00:17:38.953 fused_ordering(783) 00:17:38.953 fused_ordering(784) 00:17:38.953 fused_ordering(785) 00:17:38.953 fused_ordering(786) 00:17:38.953 fused_ordering(787) 00:17:38.953 fused_ordering(788) 00:17:38.953 fused_ordering(789) 00:17:38.953 fused_ordering(790) 00:17:38.953 fused_ordering(791) 00:17:38.953 fused_ordering(792) 00:17:38.953 fused_ordering(793) 00:17:38.953 fused_ordering(794) 00:17:38.953 fused_ordering(795) 00:17:38.953 fused_ordering(796) 00:17:38.953 fused_ordering(797) 00:17:38.953 fused_ordering(798) 00:17:38.953 fused_ordering(799) 00:17:38.953 fused_ordering(800) 00:17:38.953 fused_ordering(801) 00:17:38.953 fused_ordering(802) 00:17:38.953 fused_ordering(803) 00:17:38.953 fused_ordering(804) 00:17:38.953 fused_ordering(805) 00:17:38.953 fused_ordering(806) 00:17:38.953 fused_ordering(807) 00:17:38.953 fused_ordering(808) 00:17:38.953 fused_ordering(809) 00:17:38.953 fused_ordering(810) 00:17:38.953 fused_ordering(811) 00:17:38.953 fused_ordering(812) 00:17:38.953 fused_ordering(813) 00:17:38.953 fused_ordering(814) 00:17:38.953 fused_ordering(815) 00:17:38.953 fused_ordering(816) 00:17:38.953 fused_ordering(817) 00:17:38.953 fused_ordering(818) 00:17:38.953 fused_ordering(819) 00:17:38.953 fused_ordering(820) 00:17:39.886 fused_ordering(821) 00:17:39.886 fused_ordering(822) 00:17:39.886 fused_ordering(823) 00:17:39.886 fused_ordering(824) 00:17:39.886 fused_ordering(825) 00:17:39.886 fused_ordering(826) 00:17:39.886 fused_ordering(827) 00:17:39.886 fused_ordering(828) 00:17:39.887 fused_ordering(829) 00:17:39.887 fused_ordering(830) 00:17:39.887 fused_ordering(831) 00:17:39.887 fused_ordering(832) 00:17:39.887 fused_ordering(833) 00:17:39.887 fused_ordering(834) 00:17:39.887 fused_ordering(835) 00:17:39.887 
fused_ordering(836) 00:17:39.887 fused_ordering(837) 00:17:39.887 fused_ordering(838) 00:17:39.887 fused_ordering(839) 00:17:39.887 fused_ordering(840) 00:17:39.887 fused_ordering(841) 00:17:39.887 fused_ordering(842) 00:17:39.887 fused_ordering(843) 00:17:39.887 fused_ordering(844) 00:17:39.887 fused_ordering(845) 00:17:39.887 fused_ordering(846) 00:17:39.887 fused_ordering(847) 00:17:39.887 fused_ordering(848) 00:17:39.887 fused_ordering(849) 00:17:39.887 fused_ordering(850) 00:17:39.887 fused_ordering(851) 00:17:39.887 fused_ordering(852) 00:17:39.887 fused_ordering(853) 00:17:39.887 fused_ordering(854) 00:17:39.887 fused_ordering(855) 00:17:39.887 fused_ordering(856) 00:17:39.887 fused_ordering(857) 00:17:39.887 fused_ordering(858) 00:17:39.887 fused_ordering(859) 00:17:39.887 fused_ordering(860) 00:17:39.887 fused_ordering(861) 00:17:39.887 fused_ordering(862) 00:17:39.887 fused_ordering(863) 00:17:39.887 fused_ordering(864) 00:17:39.887 fused_ordering(865) 00:17:39.887 fused_ordering(866) 00:17:39.887 fused_ordering(867) 00:17:39.887 fused_ordering(868) 00:17:39.887 fused_ordering(869) 00:17:39.887 fused_ordering(870) 00:17:39.887 fused_ordering(871) 00:17:39.887 fused_ordering(872) 00:17:39.887 fused_ordering(873) 00:17:39.887 fused_ordering(874) 00:17:39.887 fused_ordering(875) 00:17:39.887 fused_ordering(876) 00:17:39.887 fused_ordering(877) 00:17:39.887 fused_ordering(878) 00:17:39.887 fused_ordering(879) 00:17:39.887 fused_ordering(880) 00:17:39.887 fused_ordering(881) 00:17:39.887 fused_ordering(882) 00:17:39.887 fused_ordering(883) 00:17:39.887 fused_ordering(884) 00:17:39.887 fused_ordering(885) 00:17:39.887 fused_ordering(886) 00:17:39.887 fused_ordering(887) 00:17:39.887 fused_ordering(888) 00:17:39.887 fused_ordering(889) 00:17:39.887 fused_ordering(890) 00:17:39.887 fused_ordering(891) 00:17:39.887 fused_ordering(892) 00:17:39.887 fused_ordering(893) 00:17:39.887 fused_ordering(894) 00:17:39.887 fused_ordering(895) 00:17:39.887 fused_ordering(896) 
00:17:39.887 fused_ordering(897) 00:17:39.887 fused_ordering(898) 00:17:39.887 fused_ordering(899) 00:17:39.887 fused_ordering(900) 00:17:39.887 fused_ordering(901) 00:17:39.887 fused_ordering(902) 00:17:39.887 fused_ordering(903) 00:17:39.887 fused_ordering(904) 00:17:39.887 fused_ordering(905) 00:17:39.887 fused_ordering(906) 00:17:39.887 fused_ordering(907) 00:17:39.887 fused_ordering(908) 00:17:39.887 fused_ordering(909) 00:17:39.887 fused_ordering(910) 00:17:39.887 fused_ordering(911) 00:17:39.887 fused_ordering(912) 00:17:39.887 fused_ordering(913) 00:17:39.887 fused_ordering(914) 00:17:39.887 fused_ordering(915) 00:17:39.887 fused_ordering(916) 00:17:39.887 fused_ordering(917) 00:17:39.887 fused_ordering(918) 00:17:39.887 fused_ordering(919) 00:17:39.887 fused_ordering(920) 00:17:39.887 fused_ordering(921) 00:17:39.887 fused_ordering(922) 00:17:39.887 fused_ordering(923) 00:17:39.887 fused_ordering(924) 00:17:39.887 fused_ordering(925) 00:17:39.887 fused_ordering(926) 00:17:39.887 fused_ordering(927) 00:17:39.887 fused_ordering(928) 00:17:39.887 fused_ordering(929) 00:17:39.887 fused_ordering(930) 00:17:39.887 fused_ordering(931) 00:17:39.887 fused_ordering(932) 00:17:39.887 fused_ordering(933) 00:17:39.887 fused_ordering(934) 00:17:39.887 fused_ordering(935) 00:17:39.887 fused_ordering(936) 00:17:39.887 fused_ordering(937) 00:17:39.887 fused_ordering(938) 00:17:39.887 fused_ordering(939) 00:17:39.887 fused_ordering(940) 00:17:39.887 fused_ordering(941) 00:17:39.887 fused_ordering(942) 00:17:39.887 fused_ordering(943) 00:17:39.887 fused_ordering(944) 00:17:39.887 fused_ordering(945) 00:17:39.887 fused_ordering(946) 00:17:39.887 fused_ordering(947) 00:17:39.887 fused_ordering(948) 00:17:39.887 fused_ordering(949) 00:17:39.887 fused_ordering(950) 00:17:39.887 fused_ordering(951) 00:17:39.887 fused_ordering(952) 00:17:39.887 fused_ordering(953) 00:17:39.887 fused_ordering(954) 00:17:39.887 fused_ordering(955) 00:17:39.887 fused_ordering(956) 00:17:39.887 
fused_ordering(957) 00:17:39.887 fused_ordering(958) 00:17:39.887 fused_ordering(959) 00:17:39.887 fused_ordering(960) 00:17:39.887 fused_ordering(961) 00:17:39.887 fused_ordering(962) 00:17:39.887 fused_ordering(963) 00:17:39.887 fused_ordering(964) 00:17:39.887 fused_ordering(965) 00:17:39.887 fused_ordering(966) 00:17:39.887 fused_ordering(967) 00:17:39.887 fused_ordering(968) 00:17:39.887 fused_ordering(969) 00:17:39.887 fused_ordering(970) 00:17:39.887 fused_ordering(971) 00:17:39.887 fused_ordering(972) 00:17:39.887 fused_ordering(973) 00:17:39.887 fused_ordering(974) 00:17:39.887 fused_ordering(975) 00:17:39.887 fused_ordering(976) 00:17:39.887 fused_ordering(977) 00:17:39.887 fused_ordering(978) 00:17:39.887 fused_ordering(979) 00:17:39.887 fused_ordering(980) 00:17:39.887 fused_ordering(981) 00:17:39.887 fused_ordering(982) 00:17:39.887 fused_ordering(983) 00:17:39.887 fused_ordering(984) 00:17:39.887 fused_ordering(985) 00:17:39.887 fused_ordering(986) 00:17:39.887 fused_ordering(987) 00:17:39.887 fused_ordering(988) 00:17:39.887 fused_ordering(989) 00:17:39.887 fused_ordering(990) 00:17:39.887 fused_ordering(991) 00:17:39.887 fused_ordering(992) 00:17:39.887 fused_ordering(993) 00:17:39.887 fused_ordering(994) 00:17:39.887 fused_ordering(995) 00:17:39.887 fused_ordering(996) 00:17:39.887 fused_ordering(997) 00:17:39.887 fused_ordering(998) 00:17:39.887 fused_ordering(999) 00:17:39.887 fused_ordering(1000) 00:17:39.887 fused_ordering(1001) 00:17:39.887 fused_ordering(1002) 00:17:39.887 fused_ordering(1003) 00:17:39.887 fused_ordering(1004) 00:17:39.887 fused_ordering(1005) 00:17:39.887 fused_ordering(1006) 00:17:39.887 fused_ordering(1007) 00:17:39.887 fused_ordering(1008) 00:17:39.887 fused_ordering(1009) 00:17:39.887 fused_ordering(1010) 00:17:39.887 fused_ordering(1011) 00:17:39.887 fused_ordering(1012) 00:17:39.887 fused_ordering(1013) 00:17:39.887 fused_ordering(1014) 00:17:39.887 fused_ordering(1015) 00:17:39.887 fused_ordering(1016) 00:17:39.887 
fused_ordering(1017) 00:17:39.887 fused_ordering(1018) 00:17:39.887 fused_ordering(1019) 00:17:39.887 fused_ordering(1020) 00:17:39.887 fused_ordering(1021) 00:17:39.887 fused_ordering(1022) 00:17:39.887 fused_ordering(1023) 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.887 rmmod nvme_tcp 00:17:39.887 rmmod nvme_fabrics 00:17:39.887 rmmod nvme_keyring 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2336689 ']' 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2336689 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2336689 ']' 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@952 -- # kill -0 2336689 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2336689 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2336689' 00:17:39.887 killing process with pid 2336689 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2336689 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2336689 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.887 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.888 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.888 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:17:39.888 18:05:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:42.426 00:17:42.426 real 0m7.484s 00:17:42.426 user 0m5.074s 00:17:42.426 sys 0m3.151s 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:42.426 ************************************ 00:17:42.426 END TEST nvmf_fused_ordering 00:17:42.426 ************************************ 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.426 ************************************ 00:17:42.426 START TEST nvmf_ns_masking 00:17:42.426 ************************************ 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:42.426 * Looking for test storage... 
00:17:42.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.426 
18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.426 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8b96ce09-215a-4a84-94b5-f7ec9a7d4d73 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=db76a8ff-a329-4e6f-9d90-ae89e469f13b 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=47c1481e-9c50-4dd7-ad21-172919dfe36a 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.427 18:05:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:17:42.427 18:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.357 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:44.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:44.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:44.358 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:44.358 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:44.358 18:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.358 18:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:44.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:17:44.358 00:17:44.358 --- 10.0.0.2 ping statistics --- 00:17:44.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.358 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:17:44.358 00:17:44.358 --- 10.0.0.1 ping statistics --- 00:17:44.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.358 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2338922 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2338922 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2338922 ']' 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.358 18:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.358 [2024-07-23 18:05:52.013201] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:17:44.358 [2024-07-23 18:05:52.013289] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.616 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.616 [2024-07-23 18:05:52.079179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.616 [2024-07-23 18:05:52.167603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.616 [2024-07-23 18:05:52.167671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.616 [2024-07-23 18:05:52.167684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.616 [2024-07-23 18:05:52.167695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.616 [2024-07-23 18:05:52.167703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:44.616 [2024-07-23 18:05:52.167742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.616 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.616 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:44.616 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.616 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.616 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.874 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.874 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:44.874 [2024-07-23 18:05:52.517981] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.132 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:45.132 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:45.132 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:45.389 Malloc1 00:17:45.389 18:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:45.647 Malloc2 00:17:45.647 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:45.907 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:46.165 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.165 [2024-07-23 18:05:53.810430] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.422 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:46.422 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47c1481e-9c50-4dd7-ad21-172919dfe36a -a 10.0.0.2 -s 4420 -i 4 00:17:46.422 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:46.422 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:46.422 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:46.422 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:46.422 18:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:48.317 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:48.317 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:48.317 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:17:48.317 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:48.317 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.317 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:48.317 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:48.317 18:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:48.574 [ 0]:0x1 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=904576047fb54e5b8d6ba839d6b43833 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 904576047fb54e5b8d6ba839d6b43833 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.574 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:48.832 [ 0]:0x1 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=904576047fb54e5b8d6ba839d6b43833 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 904576047fb54e5b8d6ba839d6b43833 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:48.832 [ 1]:0x2 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb23a226cc91454c981bcac23cdd02e0 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb23a226cc91454c981bcac23cdd02e0 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:48.832 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.089 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:49.347 18:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:49.604 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:49.604 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47c1481e-9c50-4dd7-ad21-172919dfe36a -a 10.0.0.2 -s 4420 -i 4 00:17:49.861 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:49.861 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:49.861 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.861 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:49.861 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:49.861 18:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:51.755 18:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:51.755 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:51.755 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.755 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:51.755 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.755 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:51.755 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:51.755 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # 
type -t ns_is_visible 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:52.013 [ 0]:0x2 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb23a226cc91454c981bcac23cdd02e0 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb23a226cc91454c981bcac23cdd02e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.013 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:52.271 [ 0]:0x1 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=904576047fb54e5b8d6ba839d6b43833 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 904576047fb54e5b8d6ba839d6b43833 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:17:52.271 [ 1]:0x2 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:52.271 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.529 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb23a226cc91454c981bcac23cdd02e0 00:17:52.529 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb23a226cc91454c981bcac23cdd02e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.529 18:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:52.787 [ 0]:0x2 00:17:52.787 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.788 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:52.788 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb23a226cc91454c981bcac23cdd02e0 00:17:52.788 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ cb23a226cc91454c981bcac23cdd02e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.788 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:52.788 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.788 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:53.045 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:53.045 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47c1481e-9c50-4dd7-ad21-172919dfe36a -a 10.0.0.2 -s 4420 -i 4 00:17:53.302 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:53.302 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:53.302 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.302 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:53.302 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:53.302 18:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:55.200 [ 0]:0x1 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=904576047fb54e5b8d6ba839d6b43833 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 904576047fb54e5b8d6ba839d6b43833 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:55.200 [ 1]:0x2 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb23a226cc91454c981bcac23cdd02e0 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb23a226cc91454c981bcac23cdd02e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:55.200 18:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t 
ns_is_visible 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:55.766 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:55.767 [ 0]:0x2 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb23a226cc91454c981bcac23cdd02e0 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb23a226cc91454c981bcac23cdd02e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:55.767 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:56.024 [2024-07-23 18:06:03.532043] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:56.024 request: 00:17:56.024 { 00:17:56.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.024 "nsid": 2, 00:17:56.024 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.024 "method": "nvmf_ns_remove_host", 00:17:56.024 "req_id": 1 00:17:56.024 } 00:17:56.024 Got JSON-RPC error response 00:17:56.024 response: 00:17:56.024 { 00:17:56.024 "code": -32602, 00:17:56.024 "message": "Invalid parameters" 00:17:56.024 } 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # 
valid_exec_arg ns_is_visible 0x1 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.024 18:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:56.024 [ 0]:0x2 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:56.024 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb23a226cc91454c981bcac23cdd02e0 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb23a226cc91454c981bcac23cdd02e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2340639 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2340639 /var/tmp/host.sock 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2340639 ']' 00:17:56.281 
18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:56.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.281 18:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:56.281 [2024-07-23 18:06:03.879865] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:17:56.281 [2024-07-23 18:06:03.879943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340639 ] 00:17:56.281 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.281 [2024-07-23 18:06:03.939939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.539 [2024-07-23 18:06:04.025688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.797 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.797 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:56.797 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:57.054 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:57.312 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8b96ce09-215a-4a84-94b5-f7ec9a7d4d73 00:17:57.312 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:57.312 18:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8B96CE09215A4A8494B5F7EC9A7D4D73 -i 00:17:57.569 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid db76a8ff-a329-4e6f-9d90-ae89e469f13b 00:17:57.569 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:57.569 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DB76A8FFA3294E6F9D90AE89E469F13B -i 00:17:57.827 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:58.085 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:58.342 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:58.342 18:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:58.907 nvme0n1 00:17:58.907 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:58.907 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:59.473 nvme1n2 00:17:59.473 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:59.473 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:59.473 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:59.473 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:59.473 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:59.730 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:59.730 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:59.730 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:59.730 18:06:07 
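The bdev-name check above pipes `bdev_get_bdevs` through `jq -r '.[].name' | sort | xargs` so the comparison against `nvme0n1 nvme1n2` is order-independent. The normalization stage can be seen in isolation with canned names in place of live RPC output:

```shell
# Normalize names into one sorted, space-separated line, as the
# log's jq | sort | xargs pipeline does (input here is canned,
# not taken from a live bdev_get_bdevs call).
names=$(printf 'nvme1n2\nnvme0n1\n' | sort | xargs)
[ "$names" = "nvme0n1 nvme1n2" ] && echo "bdevs match"
```

`xargs` with no command argument runs `echo`, which is what joins the sorted lines into a single space-separated string.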
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:59.989 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8b96ce09-215a-4a84-94b5-f7ec9a7d4d73 == \8\b\9\6\c\e\0\9\-\2\1\5\a\-\4\a\8\4\-\9\4\b\5\-\f\7\e\c\9\a\7\d\4\d\7\3 ]] 00:17:59.989 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:59.989 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:59.989 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ db76a8ff-a329-4e6f-9d90-ae89e469f13b == \d\b\7\6\a\8\f\f\-\a\3\2\9\-\4\e\6\f\-\9\d\9\0\-\a\e\8\9\e\4\6\9\f\1\3\b ]] 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2340639 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2340639 ']' 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2340639 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2340639 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:00.252 18:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2340639' 00:18:00.252 killing process with pid 2340639 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2340639 00:18:00.252 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2340639 00:18:00.560 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.818 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:00.818 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:00.818 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:00.818 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:18:00.818 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:00.818 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:00.819 rmmod nvme_tcp 00:18:00.819 rmmod nvme_fabrics 00:18:00.819 rmmod nvme_keyring 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 
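The `killprocess` teardown above checks the pid is alive with `kill -0`, inspects the process name via `ps --no-headers -o comm=` (refusing to kill anything named `sudo`), then sends `kill` and reaps with `wait`. A reduced sketch of the same sequence against a throwaway background job:

```shell
# Reduced killprocess-style teardown against a throwaway sleep job
# (the real helper in autotest_common.sh adds a sudo-name guard).
sleep 60 &
pid=$!
kill -0 "$pid"                            # assert the process exists
name=$(ps --no-headers -o comm= -p "$pid")
echo "killing process with pid $pid ($name)"
kill "$pid"
wait "$pid" 2>/dev/null || true           # reap; status reflects the signal
echo "stopped"
```

`wait` is what prevents the reactor process from lingering as a zombie between test stages.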
2338922 ']' 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2338922 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2338922 ']' 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2338922 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2338922 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2338922' 00:18:00.819 killing process with pid 2338922 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2338922 00:18:00.819 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2338922 00:18:01.077 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:01.077 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:01.077 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:01.077 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.077 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:01.077 
18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.077 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.077 18:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.612 00:18:03.612 real 0m21.107s 00:18:03.612 user 0m27.491s 00:18:03.612 sys 0m4.202s 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:03.612 ************************************ 00:18:03.612 END TEST nvmf_ns_masking 00:18:03.612 ************************************ 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.612 ************************************ 00:18:03.612 START TEST nvmf_nvme_cli 00:18:03.612 ************************************ 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 
00:18:03.612 * Looking for test storage... 00:18:03.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.612 18:06:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.512 
18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.512 18:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:05.512 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:05.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:05.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:05.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:05.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:05.513 18:06:12 
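The device discovery above maps each PCI function to its net devices by globbing `/sys/bus/pci/devices/$pci/net/*`, then strips the directory prefixes with the array expansion `${pci_net_devs[@]##*/}` to leave bare names like `cvl_0_0`. The expansion in isolation (paths here are canned, matching the log's addresses; bash required):

```shell
# Strip directory prefixes with ${arr[@]##*/}, the same expansion
# the harness uses on its sysfs glob (paths below are canned).
pci_net_devs=(/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
              /sys/bus/pci/devices/0000:0a:00.1/net/cvl_0_1)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[@]}"
# prints: cvl_0_0 cvl_0_1
```

`##*/` removes the longest prefix ending in `/` from each element, i.e. a pure-bash `basename` applied across the whole array.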
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.513 18:06:12 
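The `nvmf_tcp_init` steps above isolate the target NIC in a network namespace: `cvl_0_0` moves into `cvl_0_0_ns_spdk` and gets the target address 10.0.0.2, while `cvl_0_1` stays in the root namespace as the 10.0.0.1 initiator, with port 4420 opened in iptables. The sequence, condensed from the log (requires root and these exact interfaces, so it is illustrative only, not runnable as-is):

```shell
# Condensed netns fixture from the log (root + real NICs required).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
```

The two `ping -c 1` checks in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify both directions before `nvmf_tgt` is launched under `ip netns exec`.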
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:05.513 18:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:05.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:18:05.513 00:18:05.513 --- 10.0.0.2 ping statistics --- 00:18:05.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.513 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:18:05.513 00:18:05.513 --- 10.0.0.1 ping statistics --- 00:18:05.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.513 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2343648 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2343648 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2343648 ']' 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.513 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.513 [2024-07-23 18:06:13.129072] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:18:05.513 [2024-07-23 18:06:13.129171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.513 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.771 [2024-07-23 18:06:13.196362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.771 [2024-07-23 18:06:13.287029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.771 [2024-07-23 18:06:13.287085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:05.771 [2024-07-23 18:06:13.287103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.771 [2024-07-23 18:06:13.287114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.771 [2024-07-23 18:06:13.287124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.771 [2024-07-23 18:06:13.287207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.771 [2024-07-23 18:06:13.287273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.771 [2024-07-23 18:06:13.287331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.771 [2024-07-23 18:06:13.287336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.771 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.771 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:18:05.771 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.771 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.771 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 [2024-07-23 18:06:13.441742] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 Malloc0 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 Malloc1 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 18:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 [2024-07-23 18:06:13.526329] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:06.286 00:18:06.286 Discovery Log Number of Records 2, Generation counter 2 00:18:06.286 =====Discovery Log Entry 0====== 00:18:06.286 trtype: tcp 00:18:06.286 adrfam: ipv4 00:18:06.286 subtype: current discovery subsystem 00:18:06.286 treq: not required 00:18:06.286 portid: 0 00:18:06.286 trsvcid: 4420 00:18:06.286 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:06.286 traddr: 10.0.0.2 00:18:06.286 eflags: explicit discovery connections, duplicate discovery information 00:18:06.286 sectype: none 00:18:06.286 =====Discovery Log Entry 1====== 00:18:06.286 trtype: tcp 00:18:06.286 adrfam: ipv4 00:18:06.286 subtype: nvme subsystem 00:18:06.286 treq: not required 00:18:06.286 portid: 0 00:18:06.286 trsvcid: 4420 00:18:06.286 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:06.286 traddr: 10.0.0.2 00:18:06.286 eflags: none 00:18:06.286 sectype: none 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:06.286 18:06:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:06.850 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:06.850 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:06.850 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.850 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:06.850 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:06.850 18:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:08.745 /dev/nvme0n1 ]] 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:18:08.745 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:09.002 rmmod nvme_tcp 00:18:09.002 rmmod nvme_fabrics 00:18:09.002 rmmod 
nvme_keyring 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2343648 ']' 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2343648 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2343648 ']' 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2343648 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2343648 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2343648' 00:18:09.002 killing process with pid 2343648 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2343648 00:18:09.002 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2343648 00:18:09.260 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.260 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.260 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.260 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.260 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.260 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.260 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.260 18:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:11.793 00:18:11.793 real 0m8.148s 00:18:11.793 user 0m14.798s 00:18:11.793 sys 0m2.281s 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:11.793 ************************************ 00:18:11.793 END TEST nvmf_nvme_cli 00:18:11.793 ************************************ 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.793 18:06:18 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:11.793 ************************************ 00:18:11.793 START TEST nvmf_vfio_user 00:18:11.793 ************************************ 00:18:11.793 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:11.793 * Looking for test storage... 00:18:11.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.793 18:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.793 18:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # 
MALLOC_BLOCK_SIZE=512 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2344440 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2344440' 00:18:11.793 Process pid: 2344440 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2344440 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2344440 ']' 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:11.793 [2024-07-23 18:06:19.123890] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:18:11.793 [2024-07-23 18:06:19.123961] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.793 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.793 [2024-07-23 18:06:19.185875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.793 [2024-07-23 18:06:19.270228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.793 [2024-07-23 18:06:19.270284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.793 [2024-07-23 18:06:19.270306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.793 [2024-07-23 18:06:19.270324] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.793 [2024-07-23 18:06:19.270349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:11.793 [2024-07-23 18:06:19.270398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:11.793 [2024-07-23 18:06:19.270456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:11.793 [2024-07-23 18:06:19.270523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:18:11.793 [2024-07-23 18:06:19.270525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:11.793 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0
00:18:11.794 18:06:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:18:13.165 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:18:13.165 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:18:13.165 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:18:13.165 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:13.165 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:18:13.165 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:18:13.423 Malloc1
00:18:13.423 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:18:13.681 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:18:13.939 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:18:14.197 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:14.197 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:18:14.197 18:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:18:14.455 Malloc2
00:18:14.455 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:18:14.712 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:18:14.969 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:18:15.228 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:18:15.228 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:18:15.228 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:15.228 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:18:15.228 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:18:15.228 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:18:15.228 [2024-07-23 18:06:22.798219] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:18:15.228 [2024-07-23 18:06:22.798263] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344972 ]
00:18:15.228 EAL: No free 2048 kB hugepages reported on node 1
00:18:15.228 [2024-07-23 18:06:22.832456] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:18:15.228 [2024-07-23 18:06:22.841805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:18:15.228 [2024-07-23 18:06:22.841839] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f267d8aa000
00:18:15.228 [2024-07-23 18:06:22.842805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:15.229 [2024-07-23 18:06:22.843792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:15.229 [2024-07-23
18:06:22.844797] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.229 [2024-07-23 18:06:22.845802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:15.229 [2024-07-23 18:06:22.846809] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:15.229 [2024-07-23 18:06:22.847813] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.229 [2024-07-23 18:06:22.848819] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:15.229 [2024-07-23 18:06:22.849822] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.229 [2024-07-23 18:06:22.850832] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:15.229 [2024-07-23 18:06:22.850859] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f267c65e000 00:18:15.229 [2024-07-23 18:06:22.851978] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:15.229 [2024-07-23 18:06:22.871607] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:15.229 [2024-07-23 18:06:22.871668] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:15.229 [2024-07-23 18:06:22.873959] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:18:15.229 [2024-07-23 18:06:22.874012] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:15.229 [2024-07-23 18:06:22.874108] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:15.229 [2024-07-23 18:06:22.874138] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:15.229 [2024-07-23 18:06:22.874148] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:15.229 [2024-07-23 18:06:22.874955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:15.229 [2024-07-23 18:06:22.874980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:15.229 [2024-07-23 18:06:22.874993] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:15.229 [2024-07-23 18:06:22.875963] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:15.229 [2024-07-23 18:06:22.875983] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:15.229 [2024-07-23 18:06:22.875997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:15.229 [2024-07-23 18:06:22.876970] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:15.229 [2024-07-23 18:06:22.876989] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:15.229 [2024-07-23 18:06:22.877972] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:15.229 [2024-07-23 18:06:22.877990] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:15.229 [2024-07-23 18:06:22.877999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:15.229 [2024-07-23 18:06:22.878011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:15.229 [2024-07-23 18:06:22.878120] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:15.229 [2024-07-23 18:06:22.878128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:15.229 [2024-07-23 18:06:22.878137] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:15.229 [2024-07-23 18:06:22.878981] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:15.229 [2024-07-23 18:06:22.879986] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:15.229 [2024-07-23 18:06:22.880990] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:15.229 
[2024-07-23 18:06:22.881986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:15.229 [2024-07-23 18:06:22.882116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:15.229 [2024-07-23 18:06:22.883003] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:15.229 [2024-07-23 18:06:22.883024] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:15.229 [2024-07-23 18:06:22.883034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883065] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:15.229 [2024-07-23 18:06:22.883090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883127] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:15.229 [2024-07-23 18:06:22.883140] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.229 [2024-07-23 18:06:22.883146] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.229 [2024-07-23 18:06:22.883167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.229 [2024-07-23 18:06:22.883237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:15.229 [2024-07-23 18:06:22.883257] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:15.229 [2024-07-23 18:06:22.883266] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:15.229 [2024-07-23 18:06:22.883274] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:15.229 [2024-07-23 18:06:22.883297] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:15.229 [2024-07-23 18:06:22.883314] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:15.229 [2024-07-23 18:06:22.883332] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:15.229 [2024-07-23 18:06:22.883340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:15.229 [2024-07-23 18:06:22.883410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:15.229 [2024-07-23 18:06:22.883432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.229 [2024-07-23 18:06:22.883446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.229 [2024-07-23 18:06:22.883459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.229 [2024-07-23 18:06:22.883471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.229 [2024-07-23 18:06:22.883480] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:15.229 [2024-07-23 18:06:22.883545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:15.229 [2024-07-23 18:06:22.883566] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:15.229 [2024-07-23 18:06:22.883580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883631] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:15.229 [2024-07-23 18:06:22.883644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:15.229 [2024-07-23 18:06:22.883726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:15.229 [2024-07-23 18:06:22.883761] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:15.229 [2024-07-23 18:06:22.883770] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:15.230 [2024-07-23 18:06:22.883776] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.230 [2024-07-23 18:06:22.883785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.883816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:15.230 [2024-07-23 18:06:22.883834] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:15.230 [2024-07-23 18:06:22.883877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.883896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:15.230 [2024-07-23 
18:06:22.883909] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:15.230 [2024-07-23 18:06:22.883918] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.230 [2024-07-23 18:06:22.883924] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.230 [2024-07-23 18:06:22.883933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.883959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:15.230 [2024-07-23 18:06:22.883982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.883998] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.884010] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:15.230 [2024-07-23 18:06:22.884018] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.230 [2024-07-23 18:06:22.884024] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.230 [2024-07-23 18:06:22.884033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.884050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:15.230 [2024-07-23 18:06:22.884064] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.884075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.884089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.884103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.884112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.884124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.884134] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:15.230 [2024-07-23 18:06:22.884141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:15.230 [2024-07-23 18:06:22.884150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:15.230 [2024-07-23 18:06:22.884192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.884212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:18:15.230 [2024-07-23 18:06:22.884231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.884242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:15.230 [2024-07-23 18:06:22.884258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.884270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:15.230 [2024-07-23 18:06:22.884286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.884313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:15.230 [2024-07-23 18:06:22.884347] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:15.230 [2024-07-23 18:06:22.884359] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:15.230 [2024-07-23 18:06:22.884366] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:15.230 [2024-07-23 18:06:22.884372] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:15.230 [2024-07-23 18:06:22.884378] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:15.230 [2024-07-23 18:06:22.884387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:15.230 [2024-07-23 18:06:22.884399] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:18:15.230 [2024-07-23 18:06:22.884408] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:15.230 [2024-07-23 18:06:22.884414] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.230 [2024-07-23 18:06:22.884423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.884434] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:15.230 [2024-07-23 18:06:22.884443] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.230 [2024-07-23 18:06:22.884449] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.230 [2024-07-23 18:06:22.884458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.884471] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:15.230 [2024-07-23 18:06:22.884479] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:15.230 [2024-07-23 18:06:22.884485] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.230 [2024-07-23 18:06:22.884498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:15.230 [2024-07-23 18:06:22.884511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:15.230 [2024-07-23 18:06:22.884531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:18:15.230 [2024-07-23 18:06:22.884551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:18:15.230 [2024-07-23 18:06:22.884564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:18:15.230 =====================================================
00:18:15.230 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:15.230 =====================================================
00:18:15.230 Controller Capabilities/Features
00:18:15.230 ================================
00:18:15.230 Vendor ID: 4e58
00:18:15.230 Subsystem Vendor ID: 4e58
00:18:15.230 Serial Number: SPDK1
00:18:15.230 Model Number: SPDK bdev Controller
00:18:15.230 Firmware Version: 24.09
00:18:15.230 Recommended Arb Burst: 6
00:18:15.230 IEEE OUI Identifier: 8d 6b 50
00:18:15.230 Multi-path I/O
00:18:15.230 May have multiple subsystem ports: Yes
00:18:15.230 May have multiple controllers: Yes
00:18:15.230 Associated with SR-IOV VF: No
00:18:15.230 Max Data Transfer Size: 131072
00:18:15.230 Max Number of Namespaces: 32
00:18:15.230 Max Number of I/O Queues: 127
00:18:15.230 NVMe Specification Version (VS): 1.3
00:18:15.230 NVMe Specification Version (Identify): 1.3
00:18:15.230 Maximum Queue Entries: 256
00:18:15.230 Contiguous Queues Required: Yes
00:18:15.230 Arbitration Mechanisms Supported
00:18:15.230 Weighted Round Robin: Not Supported
00:18:15.230 Vendor Specific: Not Supported
00:18:15.230 Reset Timeout: 15000 ms
00:18:15.230 Doorbell Stride: 4 bytes
00:18:15.230 NVM Subsystem Reset: Not Supported
00:18:15.230 Command Sets Supported
00:18:15.230 NVM Command Set: Supported
00:18:15.230 Boot Partition: Not Supported
00:18:15.230 Memory Page Size Minimum: 4096 bytes
00:18:15.230 Memory Page Size Maximum: 4096 bytes
00:18:15.230 Persistent Memory Region: Not Supported
00:18:15.230 Optional Asynchronous Events Supported
00:18:15.230 Namespace Attribute Notices: Supported
00:18:15.230 Firmware Activation Notices: Not Supported
00:18:15.230 ANA Change Notices: Not Supported
00:18:15.230 PLE Aggregate Log Change Notices: Not Supported
00:18:15.230 LBA Status Info Alert Notices: Not Supported
00:18:15.230 EGE Aggregate Log Change Notices: Not Supported
00:18:15.230 Normal NVM Subsystem Shutdown event: Not Supported
00:18:15.230 Zone Descriptor Change Notices: Not Supported
00:18:15.230 Discovery Log Change Notices: Not Supported
00:18:15.230 Controller Attributes
00:18:15.230 128-bit Host Identifier: Supported
00:18:15.230 Non-Operational Permissive Mode: Not Supported
00:18:15.230 NVM Sets: Not Supported
00:18:15.230 Read Recovery Levels: Not Supported
00:18:15.230 Endurance Groups: Not Supported
00:18:15.230 Predictable Latency Mode: Not Supported
00:18:15.230 Traffic Based Keep ALive: Not Supported
00:18:15.230 Namespace Granularity: Not Supported
00:18:15.230 SQ Associations: Not Supported
00:18:15.231 UUID List: Not Supported
00:18:15.231 Multi-Domain Subsystem: Not Supported
00:18:15.231 Fixed Capacity Management: Not Supported
00:18:15.231 Variable Capacity Management: Not Supported
00:18:15.231 Delete Endurance Group: Not Supported
00:18:15.231 Delete NVM Set: Not Supported
00:18:15.231 Extended LBA Formats Supported: Not Supported
00:18:15.231 Flexible Data Placement Supported: Not Supported
00:18:15.231
00:18:15.231 Controller Memory Buffer Support
00:18:15.231 ================================
00:18:15.231 Supported: No
00:18:15.231
00:18:15.231 Persistent Memory Region Support
00:18:15.231 ================================
00:18:15.231 Supported: No
00:18:15.231
00:18:15.231 Admin Command Set Attributes
00:18:15.231 ============================
00:18:15.231 Security Send/Receive: Not Supported
00:18:15.231 Format NVM: Not Supported
00:18:15.231 Firmware Activate/Download: Not Supported
00:18:15.231 Namespace Management: Not Supported
00:18:15.231 Device Self-Test: Not Supported
00:18:15.231 Directives: Not Supported
00:18:15.231 NVMe-MI: Not Supported
00:18:15.231 Virtualization Management: Not Supported
00:18:15.231 Doorbell Buffer Config: Not Supported
00:18:15.231 Get LBA Status Capability: Not Supported
00:18:15.231 Command & Feature Lockdown Capability: Not Supported
00:18:15.231 Abort Command Limit: 4
00:18:15.231 Async Event Request Limit: 4
00:18:15.231 Number of Firmware Slots: N/A
00:18:15.231 Firmware Slot 1 Read-Only: N/A
00:18:15.231 Firmware Activation Without Reset: N/A
00:18:15.231 Multiple Update Detection Support: N/A
00:18:15.231 Firmware Update Granularity: No Information Provided
00:18:15.231 Per-Namespace SMART Log: No
00:18:15.231 Asymmetric Namespace Access Log Page: Not Supported
00:18:15.231 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:18:15.231 Command Effects Log Page: Supported
00:18:15.231 Get Log Page Extended Data: Supported
00:18:15.231 Telemetry Log Pages: Not Supported
00:18:15.231 Persistent Event Log Pages: Not Supported
00:18:15.231 Supported Log Pages Log Page: May Support
00:18:15.231 Commands Supported & Effects Log Page: Not Supported
00:18:15.231 Feature Identifiers & Effects Log Page:May Support
00:18:15.231 NVMe-MI Commands & Effects Log Page: May Support
00:18:15.231 Data Area 4 for Telemetry Log: Not Supported
00:18:15.231 Error Log Page Entries Supported: 128
00:18:15.231 Keep Alive: Supported
00:18:15.231 Keep Alive Granularity: 10000 ms
00:18:15.231
00:18:15.231 NVM Command Set Attributes
00:18:15.231 ==========================
00:18:15.231 Submission Queue Entry Size
00:18:15.231 Max: 64
00:18:15.231 Min: 64
00:18:15.231 Completion Queue Entry Size
00:18:15.231 Max: 16
00:18:15.231 Min: 16
00:18:15.231 Number of Namespaces: 32
00:18:15.231 Compare Command: Supported
00:18:15.231 Write Uncorrectable Command: Not Supported
00:18:15.231 Dataset Management Command: Supported
00:18:15.231 Write Zeroes Command: Supported
00:18:15.231 Set Features Save Field: Not Supported
00:18:15.231 Reservations: Not Supported
00:18:15.231 Timestamp: Not Supported
00:18:15.231 Copy: Supported
00:18:15.231 Volatile Write Cache: Present
00:18:15.231 Atomic Write Unit (Normal): 1
00:18:15.231 Atomic Write Unit (PFail): 1
00:18:15.231 Atomic Compare & Write Unit: 1
00:18:15.231 Fused Compare & Write: Supported
00:18:15.231 Scatter-Gather List
00:18:15.231 SGL Command Set: Supported (Dword aligned)
00:18:15.231 SGL Keyed: Not Supported
00:18:15.231 SGL Bit Bucket Descriptor: Not Supported
00:18:15.231 SGL Metadata Pointer: Not Supported
00:18:15.231 Oversized SGL: Not Supported
00:18:15.231 SGL Metadata Address: Not Supported
00:18:15.231 SGL Offset: Not Supported
00:18:15.231 Transport SGL Data Block: Not Supported
00:18:15.231 Replay Protected Memory Block: Not Supported
00:18:15.231
00:18:15.231 Firmware Slot Information
00:18:15.231 =========================
00:18:15.231 Active slot: 1
00:18:15.231 Slot 1 Firmware Revision: 24.09
00:18:15.231
00:18:15.231
00:18:15.231 Commands Supported and Effects
00:18:15.231 ==============================
00:18:15.231 Admin Commands
00:18:15.231 --------------
00:18:15.231 Get Log Page (02h): Supported
00:18:15.231 Identify (06h): Supported
00:18:15.231 Abort (08h): Supported
00:18:15.231 Set Features (09h): Supported
00:18:15.231 Get Features (0Ah): Supported
00:18:15.231 Asynchronous Event Request (0Ch): Supported
00:18:15.231 Keep Alive (18h): Supported
00:18:15.231 I/O Commands
00:18:15.231 ------------
00:18:15.231 Flush (00h): Supported LBA-Change
00:18:15.231 Write (01h): Supported LBA-Change
00:18:15.231 Read (02h): Supported
00:18:15.231 Compare (05h): Supported
00:18:15.231 Write Zeroes (08h): Supported LBA-Change
00:18:15.231 Dataset Management (09h): Supported LBA-Change
00:18:15.231 Copy (19h): Supported LBA-Change
00:18:15.231
00:18:15.231 Error Log
00:18:15.231 =========
00:18:15.231
00:18:15.231 Arbitration
00:18:15.231 ===========
00:18:15.231
Arbitration Burst: 1 00:18:15.231 00:18:15.231 Power Management 00:18:15.231 ================ 00:18:15.231 Number of Power States: 1 00:18:15.231 Current Power State: Power State #0 00:18:15.231 Power State #0: 00:18:15.231 Max Power: 0.00 W 00:18:15.231 Non-Operational State: Operational 00:18:15.231 Entry Latency: Not Reported 00:18:15.231 Exit Latency: Not Reported 00:18:15.231 Relative Read Throughput: 0 00:18:15.231 Relative Read Latency: 0 00:18:15.231 Relative Write Throughput: 0 00:18:15.231 Relative Write Latency: 0 00:18:15.231 Idle Power: Not Reported 00:18:15.231 Active Power: Not Reported 00:18:15.231 Non-Operational Permissive Mode: Not Supported 00:18:15.231 00:18:15.231 Health Information 00:18:15.231 ================== 00:18:15.231 Critical Warnings: 00:18:15.231 Available Spare Space: OK 00:18:15.231 Temperature: OK 00:18:15.231 Device Reliability: OK 00:18:15.231 Read Only: No 00:18:15.231 Volatile Memory Backup: OK 00:18:15.231 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:15.231 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:15.231 Available Spare: 0% 00:18:15.231 Available Sp[2024-07-23 18:06:22.884735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:15.231 [2024-07-23 18:06:22.884753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:15.231 [2024-07-23 18:06:22.884806] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:15.231 [2024-07-23 18:06:22.884835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.231 [2024-07-23 18:06:22.884857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.231 [2024-07-23 18:06:22.884875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.231 [2024-07-23 18:06:22.884894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.231 [2024-07-23 18:06:22.885017] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:15.231 [2024-07-23 18:06:22.885039] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:15.231 [2024-07-23 18:06:22.886012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:15.231 [2024-07-23 18:06:22.886101] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:15.231 [2024-07-23 18:06:22.886117] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:15.489 [2024-07-23 18:06:22.887028] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:15.489 [2024-07-23 18:06:22.887061] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:15.489 [2024-07-23 18:06:22.887144] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:15.489 [2024-07-23 18:06:22.892342] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:15.489 are Threshold: 0% 00:18:15.489 Life Percentage Used: 0% 00:18:15.489 Data Units Read: 0 00:18:15.489 Data Units Written: 0 00:18:15.489 Host Read Commands: 0 00:18:15.489 Host Write Commands: 
0 00:18:15.489 Controller Busy Time: 0 minutes 00:18:15.489 Power Cycles: 0 00:18:15.489 Power On Hours: 0 hours 00:18:15.489 Unsafe Shutdowns: 0 00:18:15.489 Unrecoverable Media Errors: 0 00:18:15.489 Lifetime Error Log Entries: 0 00:18:15.489 Warning Temperature Time: 0 minutes 00:18:15.489 Critical Temperature Time: 0 minutes 00:18:15.489 00:18:15.489 Number of Queues 00:18:15.489 ================ 00:18:15.489 Number of I/O Submission Queues: 127 00:18:15.489 Number of I/O Completion Queues: 127 00:18:15.489 00:18:15.489 Active Namespaces 00:18:15.489 ================= 00:18:15.489 Namespace ID:1 00:18:15.489 Error Recovery Timeout: Unlimited 00:18:15.489 Command Set Identifier: NVM (00h) 00:18:15.489 Deallocate: Supported 00:18:15.489 Deallocated/Unwritten Error: Not Supported 00:18:15.489 Deallocated Read Value: Unknown 00:18:15.489 Deallocate in Write Zeroes: Not Supported 00:18:15.489 Deallocated Guard Field: 0xFFFF 00:18:15.489 Flush: Supported 00:18:15.489 Reservation: Supported 00:18:15.489 Namespace Sharing Capabilities: Multiple Controllers 00:18:15.489 Size (in LBAs): 131072 (0GiB) 00:18:15.489 Capacity (in LBAs): 131072 (0GiB) 00:18:15.489 Utilization (in LBAs): 131072 (0GiB) 00:18:15.489 NGUID: 17C6F7DFD1E04290919C62AC064F72B1 00:18:15.489 UUID: 17c6f7df-d1e0-4290-919c-62ac064f72b1 00:18:15.489 Thin Provisioning: Not Supported 00:18:15.489 Per-NS Atomic Units: Yes 00:18:15.489 Atomic Boundary Size (Normal): 0 00:18:15.489 Atomic Boundary Size (PFail): 0 00:18:15.489 Atomic Boundary Offset: 0 00:18:15.489 Maximum Single Source Range Length: 65535 00:18:15.489 Maximum Copy Length: 65535 00:18:15.489 Maximum Source Range Count: 1 00:18:15.489 NGUID/EUI64 Never Reused: No 00:18:15.489 Namespace Write Protected: No 00:18:15.489 Number of LBA Formats: 1 00:18:15.489 Current LBA Format: LBA Format #00 00:18:15.489 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:15.489 00:18:15.489 18:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:15.489 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.489 [2024-07-23 18:06:23.119163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:20.752 Initializing NVMe Controllers 00:18:20.752 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:20.752 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:20.752 Initialization complete. Launching workers. 00:18:20.752 ======================================================== 00:18:20.752 Latency(us) 00:18:20.752 Device Information : IOPS MiB/s Average min max 00:18:20.752 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34883.20 136.26 3670.00 1178.34 9855.81 00:18:20.752 ======================================================== 00:18:20.752 Total : 34883.20 136.26 3670.00 1178.34 9855.81 00:18:20.752 00:18:20.752 [2024-07-23 18:06:28.141117] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:20.752 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:20.752 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.752 [2024-07-23 18:06:28.385259] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.053 Initializing NVMe Controllers 00:18:26.053 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:26.053 Initialization complete. Launching workers. 00:18:26.053 ======================================================== 00:18:26.053 Latency(us) 00:18:26.053 Device Information : IOPS MiB/s Average min max 00:18:26.053 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.80 62.40 8020.99 5773.59 15968.49 00:18:26.053 ======================================================== 00:18:26.053 Total : 15974.80 62.40 8020.99 5773.59 15968.49 00:18:26.053 00:18:26.053 [2024-07-23 18:06:33.417960] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.053 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:26.053 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.053 [2024-07-23 18:06:33.633033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.314 [2024-07-23 18:06:38.713739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.314 Initializing NVMe Controllers 00:18:31.314 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:31.314 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:31.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:31.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:31.314 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:31.314 Initialization complete. Launching workers. 00:18:31.314 Starting thread on core 2 00:18:31.314 Starting thread on core 3 00:18:31.314 Starting thread on core 1 00:18:31.314 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:31.314 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.572 [2024-07-23 18:06:39.017826] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.853 [2024-07-23 18:06:42.080287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:34.853 Initializing NVMe Controllers 00:18:34.853 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.853 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.853 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:34.853 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:34.853 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:34.853 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:34.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:34.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:34.853 Initialization complete. Launching workers. 
00:18:34.853 Starting thread on core 1 with urgent priority queue 00:18:34.853 Starting thread on core 2 with urgent priority queue 00:18:34.853 Starting thread on core 3 with urgent priority queue 00:18:34.853 Starting thread on core 0 with urgent priority queue 00:18:34.853 SPDK bdev Controller (SPDK1 ) core 0: 5689.33 IO/s 17.58 secs/100000 ios 00:18:34.853 SPDK bdev Controller (SPDK1 ) core 1: 5097.33 IO/s 19.62 secs/100000 ios 00:18:34.853 SPDK bdev Controller (SPDK1 ) core 2: 5902.00 IO/s 16.94 secs/100000 ios 00:18:34.853 SPDK bdev Controller (SPDK1 ) core 3: 5765.00 IO/s 17.35 secs/100000 ios 00:18:34.853 ======================================================== 00:18:34.853 00:18:34.853 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:34.853 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.853 [2024-07-23 18:06:42.380904] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.853 Initializing NVMe Controllers 00:18:34.853 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.853 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.853 Namespace ID: 1 size: 0GB 00:18:34.853 Initialization complete. 00:18:34.853 INFO: using host memory buffer for IO 00:18:34.853 Hello world! 
00:18:34.853 [2024-07-23 18:06:42.414486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:34.853 18:06:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:34.853 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.111 [2024-07-23 18:06:42.699777] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:36.481 Initializing NVMe Controllers 00:18:36.481 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:36.481 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:36.481 Initialization complete. Launching workers. 00:18:36.481 submit (in ns) avg, min, max = 8093.4, 3515.6, 5014261.1 00:18:36.481 complete (in ns) avg, min, max = 26900.0, 2064.4, 4998602.2 00:18:36.481 00:18:36.481 Submit histogram 00:18:36.481 ================ 00:18:36.481 Range in us Cumulative Count 00:18:36.481 3.508 - 3.532: 0.2703% ( 35) 00:18:36.481 3.532 - 3.556: 1.0504% ( 101) 00:18:36.481 3.556 - 3.579: 3.0120% ( 254) 00:18:36.481 3.579 - 3.603: 7.3679% ( 564) 00:18:36.481 3.603 - 3.627: 14.3729% ( 907) 00:18:36.481 3.627 - 3.650: 22.6367% ( 1070) 00:18:36.481 3.650 - 3.674: 31.3639% ( 1130) 00:18:36.481 3.674 - 3.698: 39.3188% ( 1030) 00:18:36.481 3.698 - 3.721: 47.1579% ( 1015) 00:18:36.481 3.721 - 3.745: 52.7726% ( 727) 00:18:36.481 3.745 - 3.769: 57.2135% ( 575) 00:18:36.481 3.769 - 3.793: 60.6503% ( 445) 00:18:36.482 3.793 - 3.816: 63.8477% ( 414) 00:18:36.482 3.816 - 3.840: 67.4853% ( 471) 00:18:36.482 3.840 - 3.864: 71.3701% ( 503) 00:18:36.482 3.864 - 3.887: 75.4171% ( 524) 00:18:36.482 3.887 - 3.911: 79.2246% ( 493) 00:18:36.482 3.911 - 3.935: 82.5456% ( 430) 00:18:36.482 3.935 - 3.959: 85.0788% ( 328) 00:18:36.482 3.959 - 
3.982: 87.0945% ( 261) 00:18:36.482 3.982 - 4.006: 88.5774% ( 192) 00:18:36.482 4.006 - 4.030: 89.6355% ( 137) 00:18:36.482 4.030 - 4.053: 90.7862% ( 149) 00:18:36.482 4.053 - 4.077: 91.7362% ( 123) 00:18:36.482 4.077 - 4.101: 92.6707% ( 121) 00:18:36.482 4.101 - 4.124: 93.4121% ( 96) 00:18:36.482 4.124 - 4.148: 94.0377% ( 81) 00:18:36.482 4.148 - 4.172: 94.4934% ( 59) 00:18:36.482 4.172 - 4.196: 94.9722% ( 62) 00:18:36.482 4.196 - 4.219: 95.3352% ( 47) 00:18:36.482 4.219 - 4.243: 95.5514% ( 28) 00:18:36.482 4.243 - 4.267: 95.7368% ( 24) 00:18:36.482 4.267 - 4.290: 95.8449% ( 14) 00:18:36.482 4.290 - 4.314: 96.0303% ( 24) 00:18:36.482 4.314 - 4.338: 96.1693% ( 18) 00:18:36.482 4.338 - 4.361: 96.2774% ( 14) 00:18:36.482 4.361 - 4.385: 96.3546% ( 10) 00:18:36.482 4.385 - 4.409: 96.4242% ( 9) 00:18:36.482 4.409 - 4.433: 96.4937% ( 9) 00:18:36.482 4.433 - 4.456: 96.5863% ( 12) 00:18:36.482 4.456 - 4.480: 96.6713% ( 11) 00:18:36.482 4.480 - 4.504: 96.7176% ( 6) 00:18:36.482 4.504 - 4.527: 96.7408% ( 3) 00:18:36.482 4.527 - 4.551: 96.7563% ( 2) 00:18:36.482 4.551 - 4.575: 96.7717% ( 2) 00:18:36.482 4.575 - 4.599: 96.7794% ( 1) 00:18:36.482 4.599 - 4.622: 96.8026% ( 3) 00:18:36.482 4.622 - 4.646: 96.8489% ( 6) 00:18:36.482 4.670 - 4.693: 96.8567% ( 1) 00:18:36.482 4.693 - 4.717: 96.8644% ( 1) 00:18:36.482 4.717 - 4.741: 96.8798% ( 2) 00:18:36.482 4.741 - 4.764: 96.9107% ( 4) 00:18:36.482 4.764 - 4.788: 96.9571% ( 6) 00:18:36.482 4.788 - 4.812: 96.9880% ( 4) 00:18:36.482 4.812 - 4.836: 97.0420% ( 7) 00:18:36.482 4.836 - 4.859: 97.0575% ( 2) 00:18:36.482 4.859 - 4.883: 97.1270% ( 9) 00:18:36.482 4.883 - 4.907: 97.2351% ( 14) 00:18:36.482 4.907 - 4.930: 97.2505% ( 2) 00:18:36.482 4.930 - 4.954: 97.2737% ( 3) 00:18:36.482 4.954 - 4.978: 97.3046% ( 4) 00:18:36.482 4.978 - 5.001: 97.3664% ( 8) 00:18:36.482 5.001 - 5.025: 97.4050% ( 5) 00:18:36.482 5.025 - 5.049: 97.4282% ( 3) 00:18:36.482 5.049 - 5.073: 97.5131% ( 11) 00:18:36.482 5.073 - 5.096: 97.5595% ( 6) 00:18:36.482 5.096 
- 5.120: 97.5981% ( 5) 00:18:36.482 5.120 - 5.144: 97.6521% ( 7) 00:18:36.482 5.144 - 5.167: 97.7062% ( 7) 00:18:36.482 5.167 - 5.191: 97.7371% ( 4) 00:18:36.482 5.191 - 5.215: 97.7680% ( 4) 00:18:36.482 5.215 - 5.239: 97.8066% ( 5) 00:18:36.482 5.239 - 5.262: 97.8298% ( 3) 00:18:36.482 5.262 - 5.286: 97.8530% ( 3) 00:18:36.482 5.286 - 5.310: 97.8684% ( 2) 00:18:36.482 5.333 - 5.357: 97.8993% ( 4) 00:18:36.482 5.357 - 5.381: 97.9070% ( 1) 00:18:36.482 5.381 - 5.404: 97.9302% ( 3) 00:18:36.482 5.404 - 5.428: 97.9611% ( 4) 00:18:36.482 5.428 - 5.452: 97.9842% ( 3) 00:18:36.482 5.452 - 5.476: 97.9920% ( 1) 00:18:36.482 5.476 - 5.499: 98.0151% ( 3) 00:18:36.482 5.499 - 5.523: 98.0383% ( 3) 00:18:36.482 5.523 - 5.547: 98.0538% ( 2) 00:18:36.482 5.547 - 5.570: 98.0692% ( 2) 00:18:36.482 5.594 - 5.618: 98.0769% ( 1) 00:18:36.482 5.618 - 5.641: 98.0924% ( 2) 00:18:36.482 5.665 - 5.689: 98.1078% ( 2) 00:18:36.482 5.689 - 5.713: 98.1155% ( 1) 00:18:36.482 5.713 - 5.736: 98.1233% ( 1) 00:18:36.482 5.760 - 5.784: 98.1310% ( 1) 00:18:36.482 5.807 - 5.831: 98.1387% ( 1) 00:18:36.482 5.902 - 5.926: 98.1619% ( 3) 00:18:36.482 5.926 - 5.950: 98.1696% ( 1) 00:18:36.482 6.044 - 6.068: 98.1773% ( 1) 00:18:36.482 6.163 - 6.210: 98.1850% ( 1) 00:18:36.482 6.258 - 6.305: 98.1928% ( 1) 00:18:36.482 6.353 - 6.400: 98.2005% ( 1) 00:18:36.482 6.400 - 6.447: 98.2082% ( 1) 00:18:36.482 6.542 - 6.590: 98.2237% ( 2) 00:18:36.482 6.590 - 6.637: 98.2314% ( 1) 00:18:36.482 6.684 - 6.732: 98.2391% ( 1) 00:18:36.482 6.874 - 6.921: 98.2468% ( 1) 00:18:36.482 6.921 - 6.969: 98.2546% ( 1) 00:18:36.482 6.969 - 7.016: 98.2623% ( 1) 00:18:36.482 7.064 - 7.111: 98.2700% ( 1) 00:18:36.482 7.111 - 7.159: 98.2777% ( 1) 00:18:36.482 7.206 - 7.253: 98.2854% ( 1) 00:18:36.482 7.443 - 7.490: 98.2932% ( 1) 00:18:36.482 7.490 - 7.538: 98.3009% ( 1) 00:18:36.482 7.633 - 7.680: 98.3086% ( 1) 00:18:36.482 7.775 - 7.822: 98.3241% ( 2) 00:18:36.482 7.917 - 7.964: 98.3395% ( 2) 00:18:36.482 8.012 - 8.059: 98.3472% ( 1) 
00:18:36.482 8.107 - 8.154: 98.3550% ( 1) 00:18:36.482 8.154 - 8.201: 98.3627% ( 1) 00:18:36.482 8.391 - 8.439: 98.3704% ( 1) 00:18:36.482 8.439 - 8.486: 98.3781% ( 1) 00:18:36.482 8.486 - 8.533: 98.3859% ( 1) 00:18:36.482 8.533 - 8.581: 98.3936% ( 1) 00:18:36.482 8.581 - 8.628: 98.4090% ( 2) 00:18:36.482 8.723 - 8.770: 98.4245% ( 2) 00:18:36.482 8.818 - 8.865: 98.4322% ( 1) 00:18:36.482 8.865 - 8.913: 98.4399% ( 1) 00:18:36.482 8.913 - 8.960: 98.4554% ( 2) 00:18:36.482 9.150 - 9.197: 98.4708% ( 2) 00:18:36.482 9.197 - 9.244: 98.4785% ( 1) 00:18:36.482 9.292 - 9.339: 98.4863% ( 1) 00:18:36.482 9.481 - 9.529: 98.4940% ( 1) 00:18:36.482 9.529 - 9.576: 98.5017% ( 1) 00:18:36.482 9.624 - 9.671: 98.5094% ( 1) 00:18:36.482 9.671 - 9.719: 98.5249% ( 2) 00:18:36.482 9.719 - 9.766: 98.5403% ( 2) 00:18:36.482 9.766 - 9.813: 98.5480% ( 1) 00:18:36.482 9.813 - 9.861: 98.5558% ( 1) 00:18:36.482 9.861 - 9.908: 98.5635% ( 1) 00:18:36.482 10.003 - 10.050: 98.5712% ( 1) 00:18:36.482 10.193 - 10.240: 98.5789% ( 1) 00:18:36.482 10.335 - 10.382: 98.5867% ( 1) 00:18:36.482 10.382 - 10.430: 98.6021% ( 2) 00:18:36.482 10.430 - 10.477: 98.6098% ( 1) 00:18:36.482 10.477 - 10.524: 98.6253% ( 2) 00:18:36.482 10.667 - 10.714: 98.6330% ( 1) 00:18:36.482 10.761 - 10.809: 98.6407% ( 1) 00:18:36.482 10.809 - 10.856: 98.6484% ( 1) 00:18:36.482 10.856 - 10.904: 98.6562% ( 1) 00:18:36.482 10.904 - 10.951: 98.6639% ( 1) 00:18:36.482 11.046 - 11.093: 98.6716% ( 1) 00:18:36.482 11.141 - 11.188: 98.6793% ( 1) 00:18:36.482 11.188 - 11.236: 98.6871% ( 1) 00:18:36.482 11.804 - 11.852: 98.6948% ( 1) 00:18:36.482 11.852 - 11.899: 98.7025% ( 1) 00:18:36.482 12.136 - 12.231: 98.7102% ( 1) 00:18:36.482 12.516 - 12.610: 98.7179% ( 1) 00:18:36.482 12.705 - 12.800: 98.7257% ( 1) 00:18:36.482 12.895 - 12.990: 98.7411% ( 2) 00:18:36.482 13.084 - 13.179: 98.7488% ( 1) 00:18:36.482 13.179 - 13.274: 98.7566% ( 1) 00:18:36.482 13.369 - 13.464: 98.7720% ( 2) 00:18:36.482 13.464 - 13.559: 98.7797% ( 1) 00:18:36.482 13.653 
- 13.748: 98.7875% ( 1) 00:18:36.482 13.843 - 13.938: 98.8029% ( 2) 00:18:36.482 14.033 - 14.127: 98.8106% ( 1) 00:18:36.482 14.222 - 14.317: 98.8184% ( 1) 00:18:36.482 14.412 - 14.507: 98.8338% ( 2) 00:18:36.482 14.886 - 14.981: 98.8415% ( 1) 00:18:36.482 15.170 - 15.265: 98.8492% ( 1) 00:18:36.482 15.265 - 15.360: 98.8570% ( 1) 00:18:36.482 17.067 - 17.161: 98.8647% ( 1) 00:18:36.482 17.256 - 17.351: 98.8724% ( 1) 00:18:36.482 17.351 - 17.446: 98.8956% ( 3) 00:18:36.482 17.446 - 17.541: 98.9110% ( 2) 00:18:36.482 17.541 - 17.636: 98.9496% ( 5) 00:18:36.482 17.636 - 17.730: 98.9651% ( 2) 00:18:36.482 17.730 - 17.825: 98.9883% ( 3) 00:18:36.482 17.825 - 17.920: 99.0423% ( 7) 00:18:36.482 17.920 - 18.015: 99.1041% ( 8) 00:18:36.482 18.015 - 18.110: 99.1582% ( 7) 00:18:36.482 18.110 - 18.204: 99.2045% ( 6) 00:18:36.483 18.204 - 18.299: 99.3281% ( 16) 00:18:36.483 18.299 - 18.394: 99.3821% ( 7) 00:18:36.483 18.394 - 18.489: 99.4825% ( 13) 00:18:36.483 18.489 - 18.584: 99.5443% ( 8) 00:18:36.483 18.584 - 18.679: 99.5829% ( 5) 00:18:36.483 18.679 - 18.773: 99.6216% ( 5) 00:18:36.483 18.773 - 18.868: 99.6679% ( 6) 00:18:36.483 18.868 - 18.963: 99.6756% ( 1) 00:18:36.483 18.963 - 19.058: 99.7065% ( 4) 00:18:36.483 19.058 - 19.153: 99.7297% ( 3) 00:18:36.483 19.153 - 19.247: 99.7451% ( 2) 00:18:36.483 19.247 - 19.342: 99.7683% ( 3) 00:18:36.483 19.342 - 19.437: 99.7838% ( 2) 00:18:36.483 19.437 - 19.532: 99.7992% ( 2) 00:18:36.483 19.627 - 19.721: 99.8146% ( 2) 00:18:36.483 20.006 - 20.101: 99.8224% ( 1) 00:18:36.483 20.670 - 20.764: 99.8301% ( 1) 00:18:36.483 21.049 - 21.144: 99.8378% ( 1) 00:18:36.483 22.092 - 22.187: 99.8455% ( 1) 00:18:36.483 22.566 - 22.661: 99.8533% ( 1) 00:18:36.483 24.178 - 24.273: 99.8610% ( 1) 00:18:36.483 24.652 - 24.841: 99.8687% ( 1) 00:18:36.483 24.841 - 25.031: 99.8764% ( 1) 00:18:36.483 25.221 - 25.410: 99.8842% ( 1) 00:18:36.483 25.790 - 25.979: 99.8919% ( 1) 00:18:36.483 26.359 - 26.548: 99.8996% ( 1) 00:18:36.483 3094.756 - 3106.892: 
99.9073% ( 1) 00:18:36.483 3980.705 - 4004.978: 99.9537% ( 6) 00:18:36.483 4004.978 - 4029.250: 99.9846% ( 4) 00:18:36.483 4975.881 - 5000.154: 99.9923% ( 1) 00:18:36.483 5000.154 - 5024.427: 100.0000% ( 1) 00:18:36.483 00:18:36.483 Complete histogram 00:18:36.483 ================== 00:18:36.483 Range in us Cumulative Count 00:18:36.483 2.062 - 2.074: 3.3905% ( 439) 00:18:36.483 2.074 - 2.086: 37.9827% ( 4479) 00:18:36.483 2.086 - 2.098: 44.8177% ( 885) 00:18:36.483 2.098 - 2.110: 49.1891% ( 566) 00:18:36.483 2.110 - 2.121: 59.0748% ( 1280) 00:18:36.483 2.121 - 2.133: 60.9592% ( 244) 00:18:36.483 2.133 - 2.145: 65.6086% ( 602) 00:18:36.483 2.145 - 2.157: 74.5057% ( 1152) 00:18:36.483 2.157 - 2.169: 75.5483% ( 135) 00:18:36.483 2.169 - 2.181: 78.1588% ( 338) 00:18:36.483 2.181 - 2.193: 81.2326% ( 398) 00:18:36.483 2.193 - 2.204: 81.8736% ( 83) 00:18:36.483 2.204 - 2.216: 83.5959% ( 223) 00:18:36.483 2.216 - 2.228: 88.5851% ( 646) 00:18:36.483 2.228 - 2.240: 90.2379% ( 214) 00:18:36.483 2.240 - 2.252: 91.2728% ( 134) 00:18:36.483 2.252 - 2.264: 92.6089% ( 173) 00:18:36.483 2.264 - 2.276: 92.9178% ( 40) 00:18:36.483 2.276 - 2.287: 93.2499% ( 43) 00:18:36.483 2.287 - 2.299: 93.9218% ( 87) 00:18:36.483 2.299 - 2.311: 94.5320% ( 79) 00:18:36.483 2.311 - 2.323: 94.7482% ( 28) 00:18:36.483 2.323 - 2.335: 94.7791% ( 4) 00:18:36.483 2.335 - 2.347: 94.7946% ( 2) 00:18:36.483 2.347 - 2.359: 94.8563% ( 8) 00:18:36.483 2.359 - 2.370: 95.0417% ( 24) 00:18:36.483 2.370 - 2.382: 95.3661% ( 42) 00:18:36.483 2.382 - 2.394: 95.8449% ( 62) 00:18:36.483 2.394 - 2.406: 96.1925% ( 45) 00:18:36.483 2.406 - 2.418: 96.4551% ( 34) 00:18:36.483 2.418 - 2.430: 96.7099% ( 33) 00:18:36.483 2.430 - 2.441: 96.9262% ( 28) 00:18:36.483 2.441 - 2.453: 97.0575% ( 17) 00:18:36.483 2.453 - 2.465: 97.2351% ( 23) 00:18:36.483 2.465 - 2.477: 97.3432% ( 14) 00:18:36.483 2.477 - 2.489: 97.4822% ( 18) 00:18:36.483 2.489 - 2.501: 97.5904% ( 14) 00:18:36.483 2.501 - 2.513: 97.6676% ( 10) 00:18:36.483 2.513 - 
2.524: 97.7448% ( 10) 00:18:36.483 2.524 - 2.536: 97.7525% ( 1) 00:18:36.483 2.536 - 2.548: 97.7757% ( 3) 00:18:36.483 2.548 - 2.560: 97.7989% ( 3) 00:18:36.483 2.560 - 2.572: 97.8530% ( 7) 00:18:36.483 2.572 - 2.584: 97.8684% ( 2) 00:18:36.483 2.596 - 2.607: 97.8838% ( 2) 00:18:36.483 2.607 - 2.619: 97.8916% ( 1) 00:18:36.483 2.619 - 2.631: 97.9070% ( 2) 00:18:36.483 2.631 - 2.643: 97.9379% ( 4) 00:18:36.483 2.655 - 2.667: 97.9842% ( 6) 00:18:36.483 2.667 - 2.679: 97.9920% ( 1) 00:18:36.483 2.679 - 2.690: 98.0074% ( 2) 00:18:36.483 2.702 - 2.714: 98.0306% ( 3) 00:18:36.483 2.714 - 2.726: 98.0460% ( 2) 00:18:36.483 2.726 - 2.738: 98.0615% ( 2) 00:18:36.483 2.738 - 2.750: 98.0692% ( 1) 00:18:36.483 2.761 - 2.773: 98.0769% ( 1) 00:18:36.483 2.773 - 2.785: 98.0846% ( 1) 00:18:36.483 2.785 - 2.797: 98.0924% ( 1) 00:18:36.483 2.797 - 2.809: 98.1001% ( 1) 00:18:36.483 2.809 - 2.821: 98.1078% ( 1) 00:18:36.483 2.821 - 2.833: 98.1233% ( 2) 00:18:36.483 2.833 - 2.844: 98.1310% ( 1) 00:18:36.483 2.844 - 2.856: 98.1464% ( 2) 00:18:36.483 2.856 - 2.868: 98.1542% ( 1) 00:18:36.483 2.880 - 2.892: 98.1619% ( 1) 00:18:36.483 2.892 - 2.904: 98.1928% ( 4) 00:18:36.483 2.916 - 2.927: 98.2005% ( 1) 00:18:36.483 2.927 - 2.939: 98.2082% ( 1) 00:18:36.483 2.939 - 2.951: 98.2159% ( 1) 00:18:36.483 2.963 - 2.975: 98.2237% ( 1) 00:18:36.483 2.975 - 2.987: 98.2314% ( 1) 00:18:36.483 2.987 - 2.999: 98.2468% ( 2) 00:18:36.483 2.999 - 3.010: 98.2546% ( 1) 00:18:36.483 3.010 - 3.022: 98.2623% ( 1) 00:18:36.483 3.022 - 3.034: 98.2777% ( 2) 00:18:36.483 3.034 - 3.058: 98.3163% ( 5) 00:18:36.483 3.058 - 3.081: 98.3395% ( 3) 00:18:36.483 3.081 - 3.105: 98.3472% ( 1) 00:18:36.483 3.105 - 3.129: 98.3550% ( 1) 00:18:36.483 3.129 - 3.153: 98.3704% ( 2) 00:18:36.483 3.153 - 3.176: 98.3859% ( 2) 00:18:36.483 3.176 - 3.200: 98.3936% ( 1) 00:18:36.483 3.200 - 3.224: 98.4090% ( 2) 00:18:36.483 3.224 - 3.247: 98.4245% ( 2) 00:18:36.483 3.247 - 3.271: 98.4399% ( 2) 00:18:36.483 3.271 - 3.295: 98.4476% ( 1) 
00:18:36.483 3.319 - 3.342: 98.4554% ( 1) 00:18:36.483 3.342 - 3.366: 98.4631% ( 1) 00:18:36.483 3.366 - 3.390: 98.4708% ( 1) 00:18:36.483 3.390 - 3.413: 98.4785% ( 1) 00:18:36.483 3.413 - 3.437: 98.4863% ( 1) 00:18:36.483 3.437 - 3.461: 98.5017% ( 2) 00:18:36.483 3.461 - 3.484: 98.5171% ( 2) 00:18:36.483 3.508 - 3.532: 98.5403% ( 3) 00:18:36.483 3.556 - 3.579: 98.5558% ( 2) 00:18:36.483 3.579 - 3.603: 98.5712% ( 2) 00:18:36.483 3.627 - 3.650: 98.5789% ( 1) 00:18:36.483 3.650 - 3.674: 98.5867% ( 1) 00:18:36.483 3.674 - 3.698: 98.6021% ( 2) 00:18:36.483 3.698 - 3.721: 98.6098% ( 1) 00:18:36.483 3.721 - 3.745: 98.6175% ( 1) 00:18:36.483 3.793 - 3.816: 98.6253% ( 1) 00:18:36.483 3.840 - 3.864: 98.6407% ( 2) 00:18:36.483 4.172 - 4.196: 98.6484% ( 1) 00:18:36.483 4.338 - 4.361: 98.6562% ( 1) 00:18:36.483 4.859 - 4.883: 98.6639% ( 1) 00:18:36.483 5.476 - 5.499: 98.6716% ( 1) 00:18:36.483 5.594 - 5.618: 98.6793% ( 1) 00:18:36.483 5.641 - 5.665: 98.6871% ( 1) 00:18:36.483 5.736 - 5.760: 98.6948% ( 1) 00:18:36.483 5.855 - 5.879: 98.7025% ( 1) 00:18:36.483 5.879 - 5.902: 98.7102% ( 1) 00:18:36.483 6.068 - 6.116: 98.7179% ( 1) 00:18:36.483 6.684 - 6.732: 98.7257% ( 1) 00:18:36.483 7.064 - 7.111: 98.7334% ( 1) 00:18:36.483 7.301 - 7.348: 98.7411% ( 1) 00:18:36.483 7.727 - 7.775: 98.7488% ( 1) 00:18:36.483 7.917 - 7.964: 98.7566% ( 1) 00:18:36.483 7.964 - 8.012: 98.7643% ( 1) 00:18:36.483 8.486 - 8.533: 98.7720% ( 1) 00:18:36.483 12.231 - 12.326: 98.7797% ( 1) 00:18:36.483 14.317 - 14.412: 98.7875% ( 1) 00:18:36.483 15.550 - 15.644: 98.8029% ( 2) 00:18:36.483 15.644 - 15.739: 98.8106% ( 1) 00:18:36.483 15.739 - 15.834: 98.8338% ( 3) 00:18:36.483 15.834 - 15.929: 98.8570% ( 3) 00:18:36.483 15.929 - 16.024: 98.8801% ( 3) 00:18:36.483 16.024 - 16.119: 98.8956% ( 2) 00:18:36.483 16.119 - 16.213: 98.9188%[2024-07-23 18:06:43.723034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:36.483 ( 3) 00:18:36.483 16.213 - 16.308: 
98.9496% ( 4) 00:18:36.483 16.308 - 16.403: 98.9728% ( 3) 00:18:36.483 16.403 - 16.498: 99.0269% ( 7) 00:18:36.483 16.498 - 16.593: 99.0732% ( 6) 00:18:36.483 16.593 - 16.687: 99.1427% ( 9) 00:18:36.483 16.687 - 16.782: 99.1582% ( 2) 00:18:36.483 16.782 - 16.877: 99.2045% ( 6) 00:18:36.483 16.877 - 16.972: 99.2200% ( 2) 00:18:36.483 16.972 - 17.067: 99.2277% ( 1) 00:18:36.483 17.067 - 17.161: 99.2586% ( 4) 00:18:36.483 17.161 - 17.256: 99.2740% ( 2) 00:18:36.483 17.256 - 17.351: 99.2817% ( 1) 00:18:36.483 17.351 - 17.446: 99.2972% ( 2) 00:18:36.483 17.730 - 17.825: 99.3049% ( 1) 00:18:36.483 17.825 - 17.920: 99.3281% ( 3) 00:18:36.483 17.920 - 18.015: 99.3358% ( 1) 00:18:36.484 18.110 - 18.204: 99.3590% ( 3) 00:18:36.484 18.394 - 18.489: 99.3667% ( 1) 00:18:36.484 18.773 - 18.868: 99.3744% ( 1) 00:18:36.484 21.049 - 21.144: 99.3821% ( 1) 00:18:36.484 3021.938 - 3034.074: 99.3899% ( 1) 00:18:36.484 3034.074 - 3046.210: 99.3976% ( 1) 00:18:36.484 3980.705 - 4004.978: 99.7451% ( 45) 00:18:36.484 4004.978 - 4029.250: 99.9923% ( 32) 00:18:36.484 4975.881 - 5000.154: 100.0000% ( 1) 00:18:36.484 00:18:36.484 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:36.484 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:36.484 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:36.484 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:36.484 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:36.484 [ 00:18:36.484 { 00:18:36.484 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:36.484 
"subtype": "Discovery", 00:18:36.484 "listen_addresses": [], 00:18:36.484 "allow_any_host": true, 00:18:36.484 "hosts": [] 00:18:36.484 }, 00:18:36.484 { 00:18:36.484 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:36.484 "subtype": "NVMe", 00:18:36.484 "listen_addresses": [ 00:18:36.484 { 00:18:36.484 "trtype": "VFIOUSER", 00:18:36.484 "adrfam": "IPv4", 00:18:36.484 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:36.484 "trsvcid": "0" 00:18:36.484 } 00:18:36.484 ], 00:18:36.484 "allow_any_host": true, 00:18:36.484 "hosts": [], 00:18:36.484 "serial_number": "SPDK1", 00:18:36.484 "model_number": "SPDK bdev Controller", 00:18:36.484 "max_namespaces": 32, 00:18:36.484 "min_cntlid": 1, 00:18:36.484 "max_cntlid": 65519, 00:18:36.484 "namespaces": [ 00:18:36.484 { 00:18:36.484 "nsid": 1, 00:18:36.484 "bdev_name": "Malloc1", 00:18:36.484 "name": "Malloc1", 00:18:36.484 "nguid": "17C6F7DFD1E04290919C62AC064F72B1", 00:18:36.484 "uuid": "17c6f7df-d1e0-4290-919c-62ac064f72b1" 00:18:36.484 } 00:18:36.484 ] 00:18:36.484 }, 00:18:36.484 { 00:18:36.484 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:36.484 "subtype": "NVMe", 00:18:36.484 "listen_addresses": [ 00:18:36.484 { 00:18:36.484 "trtype": "VFIOUSER", 00:18:36.484 "adrfam": "IPv4", 00:18:36.484 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:36.484 "trsvcid": "0" 00:18:36.484 } 00:18:36.484 ], 00:18:36.484 "allow_any_host": true, 00:18:36.484 "hosts": [], 00:18:36.484 "serial_number": "SPDK2", 00:18:36.484 "model_number": "SPDK bdev Controller", 00:18:36.484 "max_namespaces": 32, 00:18:36.484 "min_cntlid": 1, 00:18:36.484 "max_cntlid": 65519, 00:18:36.484 "namespaces": [ 00:18:36.484 { 00:18:36.484 "nsid": 1, 00:18:36.484 "bdev_name": "Malloc2", 00:18:36.484 "name": "Malloc2", 00:18:36.484 "nguid": "6643FE2615084F2F84AF164FBC8268F1", 00:18:36.484 "uuid": "6643fe26-1508-4f2f-84af-164fbc8268f1" 00:18:36.484 } 00:18:36.484 ] 00:18:36.484 } 00:18:36.484 ] 00:18:36.484 18:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2347374 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:36.484 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:36.484 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.742 [2024-07-23 18:06:44.171875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:36.742 Malloc3 00:18:36.742 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:36.999 [2024-07-23 18:06:44.532507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:36.999 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:36.999 Asynchronous Event Request test 00:18:36.999 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:36.999 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:36.999 Registering asynchronous event callbacks... 00:18:36.999 Starting namespace attribute notice tests for all controllers... 00:18:36.999 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:36.999 aer_cb - Changed Namespace 00:18:36.999 Cleaning up... 
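The `waitforfile` gate traced above (common/autotest_common.sh, the `'[' '!' -e /tmp/aer_touch_file ']'` checks) simply blocks the test script until the aer binary touches the file after arming its AER callbacks. A minimal sketch of that helper, assuming a 0.1 s poll interval and an invented ~20 s give-up limit (the real helper's interval and limit may differ):

```shell
#!/usr/bin/env bash
# Sketch of a waitforfile-style helper: poll until a path exists.
# Names and the timeout are assumptions, not the exact SPDK code.
waitforfile() {
    local file=$1
    local i=0
    while [ ! -e "$file" ]; do
        sleep 0.1
        i=$((i + 1))
        # Give up after ~20 s so a missing file cannot hang the run.
        if [ "$i" -ge 200 ]; then
            return 1
        fi
    done
    return 0
}

# Usage mirroring the log: a producer touches the file once ready,
# and the test script blocks on it before proceeding.
( sleep 0.2; touch /tmp/aer_touch_file ) &
waitforfile /tmp/aer_touch_file && echo "file appeared"
rm -f /tmp/aer_touch_file
```

This pattern avoids a fixed `sleep` long enough for the slowest machine: the script resumes as soon as the touch file lands, which is why the trace shows the `rm -f /tmp/aer_touch_file` immediately after the wait.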
00:18:37.257 [ 00:18:37.257 { 00:18:37.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:37.257 "subtype": "Discovery", 00:18:37.257 "listen_addresses": [], 00:18:37.257 "allow_any_host": true, 00:18:37.257 "hosts": [] 00:18:37.257 }, 00:18:37.257 { 00:18:37.257 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:37.257 "subtype": "NVMe", 00:18:37.257 "listen_addresses": [ 00:18:37.257 { 00:18:37.257 "trtype": "VFIOUSER", 00:18:37.257 "adrfam": "IPv4", 00:18:37.257 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:37.257 "trsvcid": "0" 00:18:37.257 } 00:18:37.257 ], 00:18:37.257 "allow_any_host": true, 00:18:37.257 "hosts": [], 00:18:37.257 "serial_number": "SPDK1", 00:18:37.257 "model_number": "SPDK bdev Controller", 00:18:37.257 "max_namespaces": 32, 00:18:37.257 "min_cntlid": 1, 00:18:37.257 "max_cntlid": 65519, 00:18:37.257 "namespaces": [ 00:18:37.257 { 00:18:37.257 "nsid": 1, 00:18:37.257 "bdev_name": "Malloc1", 00:18:37.257 "name": "Malloc1", 00:18:37.257 "nguid": "17C6F7DFD1E04290919C62AC064F72B1", 00:18:37.257 "uuid": "17c6f7df-d1e0-4290-919c-62ac064f72b1" 00:18:37.257 }, 00:18:37.257 { 00:18:37.257 "nsid": 2, 00:18:37.257 "bdev_name": "Malloc3", 00:18:37.257 "name": "Malloc3", 00:18:37.257 "nguid": "B062EA41A7874F129763FB0B3D610ADA", 00:18:37.257 "uuid": "b062ea41-a787-4f12-9763-fb0b3d610ada" 00:18:37.257 } 00:18:37.257 ] 00:18:37.257 }, 00:18:37.257 { 00:18:37.257 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:37.257 "subtype": "NVMe", 00:18:37.257 "listen_addresses": [ 00:18:37.257 { 00:18:37.257 "trtype": "VFIOUSER", 00:18:37.257 "adrfam": "IPv4", 00:18:37.257 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:37.257 "trsvcid": "0" 00:18:37.257 } 00:18:37.257 ], 00:18:37.257 "allow_any_host": true, 00:18:37.257 "hosts": [], 00:18:37.257 "serial_number": "SPDK2", 00:18:37.257 "model_number": "SPDK bdev Controller", 00:18:37.257 "max_namespaces": 32, 00:18:37.257 "min_cntlid": 1, 00:18:37.257 "max_cntlid": 65519, 00:18:37.257 "namespaces": [ 
00:18:37.257 { 00:18:37.257 "nsid": 1, 00:18:37.257 "bdev_name": "Malloc2", 00:18:37.257 "name": "Malloc2", 00:18:37.257 "nguid": "6643FE2615084F2F84AF164FBC8268F1", 00:18:37.257 "uuid": "6643fe26-1508-4f2f-84af-164fbc8268f1" 00:18:37.257 } 00:18:37.257 ] 00:18:37.257 } 00:18:37.257 ] 00:18:37.257 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2347374 00:18:37.257 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:37.257 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:37.257 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:37.257 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:37.257 [2024-07-23 18:06:44.817243] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:18:37.257 [2024-07-23 18:06:44.817290] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347509 ] 00:18:37.257 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.257 [2024-07-23 18:06:44.853446] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:37.257 [2024-07-23 18:06:44.859602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:37.257 [2024-07-23 18:06:44.859651] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb381a85000 00:18:37.257 [2024-07-23 18:06:44.860597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.257 [2024-07-23 18:06:44.861610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.257 [2024-07-23 18:06:44.862610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.257 [2024-07-23 18:06:44.863637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:37.257 [2024-07-23 18:06:44.864643] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:37.257 [2024-07-23 18:06:44.865651] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.257 [2024-07-23 18:06:44.866656] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:37.257 [2024-07-23 18:06:44.867658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.257 [2024-07-23 18:06:44.868661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:37.257 [2024-07-23 18:06:44.868697] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb380839000 00:18:37.257 [2024-07-23 18:06:44.869814] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:37.257 [2024-07-23 18:06:44.887591] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:37.257 [2024-07-23 18:06:44.887631] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:37.257 [2024-07-23 18:06:44.889746] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:37.257 [2024-07-23 18:06:44.889799] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:37.257 [2024-07-23 18:06:44.889889] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:37.257 [2024-07-23 18:06:44.889913] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:37.257 [2024-07-23 18:06:44.889923] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:37.257 [2024-07-23 18:06:44.890754] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:37.257 [2024-07-23 18:06:44.890779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:37.257 [2024-07-23 18:06:44.890793] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:37.257 [2024-07-23 18:06:44.891783] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:37.257 [2024-07-23 18:06:44.891803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:37.257 [2024-07-23 18:06:44.891816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:37.257 [2024-07-23 18:06:44.892770] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:37.257 [2024-07-23 18:06:44.892790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:37.257 [2024-07-23 18:06:44.893776] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:37.257 [2024-07-23 18:06:44.893797] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:37.257 [2024-07-23 18:06:44.893806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:37.257 [2024-07-23 18:06:44.893817] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:37.257 [2024-07-23 18:06:44.893926] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:37.257 [2024-07-23 18:06:44.893934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:37.257 [2024-07-23 18:06:44.893942] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:37.257 [2024-07-23 18:06:44.894784] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:37.258 [2024-07-23 18:06:44.895785] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:37.258 [2024-07-23 18:06:44.896796] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:37.258 [2024-07-23 18:06:44.897791] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:37.258 [2024-07-23 18:06:44.897876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:37.258 [2024-07-23 18:06:44.898807] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:37.258 [2024-07-23 18:06:44.898830] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:37.258 [2024-07-23 18:06:44.898840] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:37.258 [2024-07-23 18:06:44.898863] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:37.258 [2024-07-23 18:06:44.898879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:37.258 [2024-07-23 18:06:44.898904] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.258 [2024-07-23 18:06:44.898913] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.258 [2024-07-23 18:06:44.898919] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.258 [2024-07-23 18:06:44.898940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.258 [2024-07-23 18:06:44.905333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:37.258 [2024-07-23 18:06:44.905357] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:37.258 [2024-07-23 18:06:44.905380] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:37.258 [2024-07-23 18:06:44.905388] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:37.258 [2024-07-23 18:06:44.905396] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:37.258 [2024-07-23 18:06:44.905404] 
nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:37.258 [2024-07-23 18:06:44.905413] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:37.258 [2024-07-23 18:06:44.905421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:37.258 [2024-07-23 18:06:44.905436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:37.258 [2024-07-23 18:06:44.905456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:37.258 [2024-07-23 18:06:44.913347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:37.258 [2024-07-23 18:06:44.913382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.258 [2024-07-23 18:06:44.913398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.258 [2024-07-23 18:06:44.913410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.258 [2024-07-23 18:06:44.913422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.258 [2024-07-23 18:06:44.913434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:37.258 [2024-07-23 18:06:44.913460] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:37.258 [2024-07-23 18:06:44.913478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.921333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.921355] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:37.517 [2024-07-23 18:06:44.921364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.921380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.921392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.921406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.929341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.929417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.929434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.929447] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:37.517 [2024-07-23 18:06:44.929456] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:37.517 [2024-07-23 18:06:44.929462] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.517 [2024-07-23 18:06:44.929472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.937341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.937366] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:37.517 [2024-07-23 18:06:44.937383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.937398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.937411] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.517 [2024-07-23 18:06:44.937419] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.517 [2024-07-23 18:06:44.937425] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.517 [2024-07-23 18:06:44.937434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.945343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.945373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.945389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.945402] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.517 [2024-07-23 18:06:44.945410] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.517 [2024-07-23 18:06:44.945420] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.517 [2024-07-23 18:06:44.945430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.953340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.953362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.953375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.953390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.953406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:37.517 
[2024-07-23 18:06:44.953415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.953424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.953432] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:37.517 [2024-07-23 18:06:44.953440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:37.517 [2024-07-23 18:06:44.953448] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:37.517 [2024-07-23 18:06:44.953476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.961325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.961352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.969330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.969354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.977342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.977367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.985342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.985375] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:37.517 [2024-07-23 18:06:44.985387] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:37.517 [2024-07-23 18:06:44.985393] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:37.517 [2024-07-23 18:06:44.985399] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:37.517 [2024-07-23 18:06:44.985405] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:37.517 [2024-07-23 18:06:44.985414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:37.517 [2024-07-23 18:06:44.985430] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:37.517 [2024-07-23 18:06:44.985439] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:37.517 [2024-07-23 18:06:44.985445] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.517 [2024-07-23 18:06:44.985454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.985464] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:37.517 [2024-07-23 18:06:44.985472] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:18:37.517 [2024-07-23 18:06:44.985478] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.517 [2024-07-23 18:06:44.985486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.985499] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:37.517 [2024-07-23 18:06:44.985506] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:37.517 [2024-07-23 18:06:44.985512] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.517 [2024-07-23 18:06:44.985521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:37.517 [2024-07-23 18:06:44.993343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.993370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.993388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:37.517 [2024-07-23 18:06:44.993400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:37.517 ===================================================== 00:18:37.517 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:37.517 ===================================================== 00:18:37.517 Controller Capabilities/Features 00:18:37.517 ================================ 00:18:37.517 Vendor ID: 4e58 00:18:37.517 
Subsystem Vendor ID: 4e58 00:18:37.517 Serial Number: SPDK2 00:18:37.517 Model Number: SPDK bdev Controller 00:18:37.517 Firmware Version: 24.09 00:18:37.517 Recommended Arb Burst: 6 00:18:37.517 IEEE OUI Identifier: 8d 6b 50 00:18:37.517 Multi-path I/O 00:18:37.517 May have multiple subsystem ports: Yes 00:18:37.517 May have multiple controllers: Yes 00:18:37.517 Associated with SR-IOV VF: No 00:18:37.517 Max Data Transfer Size: 131072 00:18:37.517 Max Number of Namespaces: 32 00:18:37.517 Max Number of I/O Queues: 127 00:18:37.517 NVMe Specification Version (VS): 1.3 00:18:37.517 NVMe Specification Version (Identify): 1.3 00:18:37.518 Maximum Queue Entries: 256 00:18:37.518 Contiguous Queues Required: Yes 00:18:37.518 Arbitration Mechanisms Supported 00:18:37.518 Weighted Round Robin: Not Supported 00:18:37.518 Vendor Specific: Not Supported 00:18:37.518 Reset Timeout: 15000 ms 00:18:37.518 Doorbell Stride: 4 bytes 00:18:37.518 NVM Subsystem Reset: Not Supported 00:18:37.518 Command Sets Supported 00:18:37.518 NVM Command Set: Supported 00:18:37.518 Boot Partition: Not Supported 00:18:37.518 Memory Page Size Minimum: 4096 bytes 00:18:37.518 Memory Page Size Maximum: 4096 bytes 00:18:37.518 Persistent Memory Region: Not Supported 00:18:37.518 Optional Asynchronous Events Supported 00:18:37.518 Namespace Attribute Notices: Supported 00:18:37.518 Firmware Activation Notices: Not Supported 00:18:37.518 ANA Change Notices: Not Supported 00:18:37.518 PLE Aggregate Log Change Notices: Not Supported 00:18:37.518 LBA Status Info Alert Notices: Not Supported 00:18:37.518 EGE Aggregate Log Change Notices: Not Supported 00:18:37.518 Normal NVM Subsystem Shutdown event: Not Supported 00:18:37.518 Zone Descriptor Change Notices: Not Supported 00:18:37.518 Discovery Log Change Notices: Not Supported 00:18:37.518 Controller Attributes 00:18:37.518 128-bit Host Identifier: Supported 00:18:37.518 Non-Operational Permissive Mode: Not Supported 00:18:37.518 NVM Sets: Not Supported 
00:18:37.518 Read Recovery Levels: Not Supported 00:18:37.518 Endurance Groups: Not Supported 00:18:37.518 Predictable Latency Mode: Not Supported 00:18:37.518 Traffic Based Keep ALive: Not Supported 00:18:37.518 Namespace Granularity: Not Supported 00:18:37.518 SQ Associations: Not Supported 00:18:37.518 UUID List: Not Supported 00:18:37.518 Multi-Domain Subsystem: Not Supported 00:18:37.518 Fixed Capacity Management: Not Supported 00:18:37.518 Variable Capacity Management: Not Supported 00:18:37.518 Delete Endurance Group: Not Supported 00:18:37.518 Delete NVM Set: Not Supported 00:18:37.518 Extended LBA Formats Supported: Not Supported 00:18:37.518 Flexible Data Placement Supported: Not Supported 00:18:37.518 00:18:37.518 Controller Memory Buffer Support 00:18:37.518 ================================ 00:18:37.518 Supported: No 00:18:37.518 00:18:37.518 Persistent Memory Region Support 00:18:37.518 ================================ 00:18:37.518 Supported: No 00:18:37.518 00:18:37.518 Admin Command Set Attributes 00:18:37.518 ============================ 00:18:37.518 Security Send/Receive: Not Supported 00:18:37.518 Format NVM: Not Supported 00:18:37.518 Firmware Activate/Download: Not Supported 00:18:37.518 Namespace Management: Not Supported 00:18:37.518 Device Self-Test: Not Supported 00:18:37.518 Directives: Not Supported 00:18:37.518 NVMe-MI: Not Supported 00:18:37.518 Virtualization Management: Not Supported 00:18:37.518 Doorbell Buffer Config: Not Supported 00:18:37.518 Get LBA Status Capability: Not Supported 00:18:37.518 Command & Feature Lockdown Capability: Not Supported 00:18:37.518 Abort Command Limit: 4 00:18:37.518 Async Event Request Limit: 4 00:18:37.518 Number of Firmware Slots: N/A 00:18:37.518 Firmware Slot 1 Read-Only: N/A 00:18:37.518 Firmware Activation Without Reset: N/A 00:18:37.518 Multiple Update Detection Support: N/A 00:18:37.518 Firmware Update Granularity: No Information Provided 00:18:37.518 Per-Namespace SMART Log: No 00:18:37.518 
Asymmetric Namespace Access Log Page: Not Supported 00:18:37.518 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:37.518 Command Effects Log Page: Supported 00:18:37.518 Get Log Page Extended Data: Supported 00:18:37.518 Telemetry Log Pages: Not Supported 00:18:37.518 Persistent Event Log Pages: Not Supported 00:18:37.518 Supported Log Pages Log Page: May Support 00:18:37.518 Commands Supported & Effects Log Page: Not Supported 00:18:37.518 Feature Identifiers & Effects Log Page:May Support 00:18:37.518 NVMe-MI Commands & Effects Log Page: May Support 00:18:37.518 Data Area 4 for Telemetry Log: Not Supported 00:18:37.518 Error Log Page Entries Supported: 128 00:18:37.518 Keep Alive: Supported 00:18:37.518 Keep Alive Granularity: 10000 ms 00:18:37.518 00:18:37.518 NVM Command Set Attributes 00:18:37.518 ========================== 00:18:37.518 Submission Queue Entry Size 00:18:37.518 Max: 64 00:18:37.518 Min: 64 00:18:37.518 Completion Queue Entry Size 00:18:37.518 Max: 16 00:18:37.518 Min: 16 00:18:37.518 Number of Namespaces: 32 00:18:37.518 Compare Command: Supported 00:18:37.518 Write Uncorrectable Command: Not Supported 00:18:37.518 Dataset Management Command: Supported 00:18:37.518 Write Zeroes Command: Supported 00:18:37.518 Set Features Save Field: Not Supported 00:18:37.518 Reservations: Not Supported 00:18:37.518 Timestamp: Not Supported 00:18:37.518 Copy: Supported 00:18:37.518 Volatile Write Cache: Present 00:18:37.518 Atomic Write Unit (Normal): 1 00:18:37.518 Atomic Write Unit (PFail): 1 00:18:37.518 Atomic Compare & Write Unit: 1 00:18:37.518 Fused Compare & Write: Supported 00:18:37.518 Scatter-Gather List 00:18:37.518 SGL Command Set: Supported (Dword aligned) 00:18:37.518 SGL Keyed: Not Supported 00:18:37.518 SGL Bit Bucket Descriptor: Not Supported 00:18:37.518 SGL Metadata Pointer: Not Supported 00:18:37.518 Oversized SGL: Not Supported 00:18:37.518 SGL Metadata Address: Not Supported 00:18:37.518 SGL Offset: Not Supported 00:18:37.518 Transport 
SGL Data Block: Not Supported 00:18:37.518 Replay Protected Memory Block: Not Supported 00:18:37.518 00:18:37.518 Firmware Slot Information 00:18:37.518 ========================= 00:18:37.518 Active slot: 1 00:18:37.518 Slot 1 Firmware Revision: 24.09 00:18:37.518 00:18:37.518 00:18:37.518 Commands Supported and Effects 00:18:37.518 ============================== 00:18:37.518 Admin Commands 00:18:37.518 -------------- 00:18:37.518 Get Log Page (02h): Supported 00:18:37.518 Identify (06h): Supported 00:18:37.518 Abort (08h): Supported 00:18:37.518 Set Features (09h): Supported 00:18:37.518 Get Features (0Ah): Supported 00:18:37.518 Asynchronous Event Request (0Ch): Supported 00:18:37.518 Keep Alive (18h): Supported 00:18:37.518 I/O Commands 00:18:37.518 ------------ 00:18:37.518 Flush (00h): Supported LBA-Change 00:18:37.518 Write (01h): Supported LBA-Change 00:18:37.518 Read (02h): Supported 00:18:37.518 Compare (05h): Supported 00:18:37.518 Write Zeroes (08h): Supported LBA-Change 00:18:37.518 Dataset Management (09h): Supported LBA-Change 00:18:37.518 Copy (19h): Supported LBA-Change 00:18:37.518 00:18:37.518 Error Log 00:18:37.518 ========= 00:18:37.518 00:18:37.518 Arbitration 00:18:37.518 =========== 00:18:37.518 Arbitration Burst: 1 00:18:37.518 00:18:37.518 Power Management 00:18:37.518 ================ 00:18:37.518 Number of Power States: 1 00:18:37.518 Current Power State: Power State #0 00:18:37.518 Power State #0: 00:18:37.518 Max Power: 0.00 W 00:18:37.518 Non-Operational State: Operational 00:18:37.518 Entry Latency: Not Reported 00:18:37.518 Exit Latency: Not Reported 00:18:37.518 Relative Read Throughput: 0 00:18:37.518 Relative Read Latency: 0 00:18:37.518 Relative Write Throughput: 0 00:18:37.518 Relative Write Latency: 0 00:18:37.518 Idle Power: Not Reported 00:18:37.518 Active Power: Not Reported 00:18:37.518 Non-Operational Permissive Mode: Not Supported 00:18:37.518 00:18:37.518 Health Information 00:18:37.518 ================== 00:18:37.518 
Critical Warnings: 00:18:37.518 Available Spare Space: OK 00:18:37.518 Temperature: OK 00:18:37.518 Device Reliability: OK 00:18:37.518 Read Only: No 00:18:37.518 Volatile Memory Backup: OK 00:18:37.518 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:37.518 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:37.518 Available Spare: 0% 00:18:37.518 Available Sp[2024-07-23 18:06:44.993522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:37.518 [2024-07-23 18:06:45.001329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:37.518 [2024-07-23 18:06:45.001384] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:37.518 [2024-07-23 18:06:45.001401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.518 [2024-07-23 18:06:45.001412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.519 [2024-07-23 18:06:45.001422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.519 [2024-07-23 18:06:45.001431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.519 [2024-07-23 18:06:45.001496] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:37.519 [2024-07-23 18:06:45.001516] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:37.519 [2024-07-23 18:06:45.002500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:37.519 [2024-07-23 18:06:45.002572] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:37.519 [2024-07-23 18:06:45.002587] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:37.519 [2024-07-23 18:06:45.003506] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:37.519 [2024-07-23 18:06:45.003530] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:37.519 [2024-07-23 18:06:45.003582] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:37.519 [2024-07-23 18:06:45.006329] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:37.519 are Threshold: 0% 00:18:37.519 Life Percentage Used: 0% 00:18:37.519 Data Units Read: 0 00:18:37.519 Data Units Written: 0 00:18:37.519 Host Read Commands: 0 00:18:37.519 Host Write Commands: 0 00:18:37.519 Controller Busy Time: 0 minutes 00:18:37.519 Power Cycles: 0 00:18:37.519 Power On Hours: 0 hours 00:18:37.519 Unsafe Shutdowns: 0 00:18:37.519 Unrecoverable Media Errors: 0 00:18:37.519 Lifetime Error Log Entries: 0 00:18:37.519 Warning Temperature Time: 0 minutes 00:18:37.519 Critical Temperature Time: 0 minutes 00:18:37.519 00:18:37.519 Number of Queues 00:18:37.519 ================ 00:18:37.519 Number of I/O Submission Queues: 127 00:18:37.519 Number of I/O Completion Queues: 127 00:18:37.519 00:18:37.519 Active Namespaces 00:18:37.519 ================= 00:18:37.519 Namespace ID:1 00:18:37.519 Error Recovery Timeout: Unlimited 00:18:37.519 Command Set Identifier: NVM (00h) 00:18:37.519 Deallocate: 
Supported 00:18:37.519 Deallocated/Unwritten Error: Not Supported 00:18:37.519 Deallocated Read Value: Unknown 00:18:37.519 Deallocate in Write Zeroes: Not Supported 00:18:37.519 Deallocated Guard Field: 0xFFFF 00:18:37.519 Flush: Supported 00:18:37.519 Reservation: Supported 00:18:37.519 Namespace Sharing Capabilities: Multiple Controllers 00:18:37.519 Size (in LBAs): 131072 (0GiB) 00:18:37.519 Capacity (in LBAs): 131072 (0GiB) 00:18:37.519 Utilization (in LBAs): 131072 (0GiB) 00:18:37.519 NGUID: 6643FE2615084F2F84AF164FBC8268F1 00:18:37.519 UUID: 6643fe26-1508-4f2f-84af-164fbc8268f1 00:18:37.519 Thin Provisioning: Not Supported 00:18:37.519 Per-NS Atomic Units: Yes 00:18:37.519 Atomic Boundary Size (Normal): 0 00:18:37.519 Atomic Boundary Size (PFail): 0 00:18:37.519 Atomic Boundary Offset: 0 00:18:37.519 Maximum Single Source Range Length: 65535 00:18:37.519 Maximum Copy Length: 65535 00:18:37.519 Maximum Source Range Count: 1 00:18:37.519 NGUID/EUI64 Never Reused: No 00:18:37.519 Namespace Write Protected: No 00:18:37.519 Number of LBA Formats: 1 00:18:37.519 Current LBA Format: LBA Format #00 00:18:37.519 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:37.519 00:18:37.519 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:37.519 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.777 [2024-07-23 18:06:45.236107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:43.042 Initializing NVMe Controllers 00:18:43.042 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:43.042 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:43.042 
Initialization complete. Launching workers. 00:18:43.042 ======================================================== 00:18:43.042 Latency(us) 00:18:43.042 Device Information : IOPS MiB/s Average min max 00:18:43.042 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34670.83 135.43 3691.11 1170.44 9004.11 00:18:43.042 ======================================================== 00:18:43.042 Total : 34670.83 135.43 3691.11 1170.44 9004.11 00:18:43.042 00:18:43.042 [2024-07-23 18:06:50.338668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:43.042 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:43.042 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.042 [2024-07-23 18:06:50.571340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:48.304 Initializing NVMe Controllers 00:18:48.304 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:48.304 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:48.304 Initialization complete. Launching workers. 
00:18:48.304 ======================================================== 00:18:48.304 Latency(us) 00:18:48.304 Device Information : IOPS MiB/s Average min max 00:18:48.304 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31962.30 124.85 4004.52 1218.75 9005.16 00:18:48.304 ======================================================== 00:18:48.304 Total : 31962.30 124.85 4004.52 1218.75 9005.16 00:18:48.304 00:18:48.304 [2024-07-23 18:06:55.594188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:48.304 18:06:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:48.304 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.304 [2024-07-23 18:06:55.813278] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.572 [2024-07-23 18:07:00.938476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.572 Initializing NVMe Controllers 00:18:53.572 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:53.572 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:53.572 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:53.572 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:53.572 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:53.572 Initialization complete. Launching workers. 
00:18:53.572 Starting thread on core 2 00:18:53.572 Starting thread on core 3 00:18:53.572 Starting thread on core 1 00:18:53.572 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:53.572 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.831 [2024-07-23 18:07:01.238856] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.010 [2024-07-23 18:07:04.836775] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:58.010 Initializing NVMe Controllers 00:18:58.010 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.010 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.010 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:58.010 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:58.010 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:58.010 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:58.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:58.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:58.011 Initialization complete. Launching workers. 
00:18:58.011 Starting thread on core 1 with urgent priority queue 00:18:58.011 Starting thread on core 2 with urgent priority queue 00:18:58.011 Starting thread on core 3 with urgent priority queue 00:18:58.011 Starting thread on core 0 with urgent priority queue 00:18:58.011 SPDK bdev Controller (SPDK2 ) core 0: 4768.00 IO/s 20.97 secs/100000 ios 00:18:58.011 SPDK bdev Controller (SPDK2 ) core 1: 4260.00 IO/s 23.47 secs/100000 ios 00:18:58.011 SPDK bdev Controller (SPDK2 ) core 2: 4663.00 IO/s 21.45 secs/100000 ios 00:18:58.011 SPDK bdev Controller (SPDK2 ) core 3: 4401.67 IO/s 22.72 secs/100000 ios 00:18:58.011 ======================================================== 00:18:58.011 00:18:58.011 18:07:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:58.011 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.011 [2024-07-23 18:07:05.129829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.011 Initializing NVMe Controllers 00:18:58.011 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.011 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.011 Namespace ID: 1 size: 0GB 00:18:58.011 Initialization complete. 00:18:58.011 INFO: using host memory buffer for IO 00:18:58.011 Hello world! 
00:18:58.011 [2024-07-23 18:07:05.140032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:58.011 18:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:58.011 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.011 [2024-07-23 18:07:05.434692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.943 Initializing NVMe Controllers 00:18:58.943 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.943 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.943 Initialization complete. Launching workers. 00:18:58.943 submit (in ns) avg, min, max = 7479.0, 3503.3, 4020788.9 00:18:58.943 complete (in ns) avg, min, max = 25155.4, 2075.6, 4016263.3 00:18:58.943 00:18:58.943 Submit histogram 00:18:58.943 ================ 00:18:58.943 Range in us Cumulative Count 00:18:58.943 3.484 - 3.508: 0.0149% ( 2) 00:18:58.943 3.508 - 3.532: 1.1835% ( 157) 00:18:58.943 3.532 - 3.556: 2.4786% ( 174) 00:18:58.943 3.556 - 3.579: 6.8701% ( 590) 00:18:58.943 3.579 - 3.603: 14.3580% ( 1006) 00:18:58.943 3.603 - 3.627: 24.1161% ( 1311) 00:18:58.943 3.627 - 3.650: 32.9587% ( 1188) 00:18:58.943 3.650 - 3.674: 40.7071% ( 1041) 00:18:58.943 3.674 - 3.698: 47.3837% ( 897) 00:18:58.943 3.698 - 3.721: 54.7525% ( 990) 00:18:58.943 3.721 - 3.745: 60.3573% ( 753) 00:18:58.943 3.745 - 3.769: 64.6744% ( 580) 00:18:58.943 3.769 - 3.793: 68.2694% ( 483) 00:18:58.943 3.793 - 3.816: 71.4403% ( 426) 00:18:58.943 3.816 - 3.840: 74.7525% ( 445) 00:18:58.943 3.840 - 3.864: 78.2583% ( 471) 00:18:58.943 3.864 - 3.887: 81.4440% ( 428) 00:18:58.943 3.887 - 3.911: 84.2799% ( 381) 00:18:58.943 3.911 - 3.935: 86.7510% ( 332) 00:18:58.943 3.935 - 
3.959: 88.3960% ( 221) 00:18:58.943 3.959 - 3.982: 90.0484% ( 222) 00:18:58.943 3.982 - 4.006: 91.6487% ( 215) 00:18:58.943 4.006 - 4.030: 92.6982% ( 141) 00:18:58.943 4.030 - 4.053: 93.6956% ( 134) 00:18:58.943 4.053 - 4.077: 94.4473% ( 101) 00:18:58.943 4.077 - 4.101: 95.0949% ( 87) 00:18:58.943 4.101 - 4.124: 95.5936% ( 67) 00:18:58.943 4.124 - 4.148: 95.9509% ( 48) 00:18:58.943 4.148 - 4.172: 96.2188% ( 36) 00:18:58.943 4.172 - 4.196: 96.3603% ( 19) 00:18:58.943 4.196 - 4.219: 96.4570% ( 13) 00:18:58.943 4.219 - 4.243: 96.5687% ( 15) 00:18:58.943 4.243 - 4.267: 96.7250% ( 21) 00:18:58.943 4.267 - 4.290: 96.7920% ( 9) 00:18:58.943 4.290 - 4.314: 96.8292% ( 5) 00:18:58.943 4.314 - 4.338: 96.9259% ( 13) 00:18:58.943 4.338 - 4.361: 97.0376% ( 15) 00:18:58.943 4.361 - 4.385: 97.0748% ( 5) 00:18:58.943 4.385 - 4.409: 97.1120% ( 5) 00:18:58.943 4.409 - 4.433: 97.1492% ( 5) 00:18:58.943 4.433 - 4.456: 97.2162% ( 9) 00:18:58.943 4.456 - 4.480: 97.2237% ( 1) 00:18:58.943 4.480 - 4.504: 97.2534% ( 4) 00:18:58.943 4.504 - 4.527: 97.2683% ( 2) 00:18:58.943 4.527 - 4.551: 97.2981% ( 4) 00:18:58.943 4.551 - 4.575: 97.3130% ( 2) 00:18:58.943 4.575 - 4.599: 97.3204% ( 1) 00:18:58.943 4.599 - 4.622: 97.3353% ( 2) 00:18:58.943 4.670 - 4.693: 97.3576% ( 3) 00:18:58.943 4.717 - 4.741: 97.3725% ( 2) 00:18:58.943 4.764 - 4.788: 97.3800% ( 1) 00:18:58.943 4.788 - 4.812: 97.3949% ( 2) 00:18:58.943 4.812 - 4.836: 97.4321% ( 5) 00:18:58.943 4.836 - 4.859: 97.4693% ( 5) 00:18:58.943 4.859 - 4.883: 97.4916% ( 3) 00:18:58.943 4.883 - 4.907: 97.5140% ( 3) 00:18:58.943 4.907 - 4.930: 97.6107% ( 13) 00:18:58.943 4.930 - 4.954: 97.6852% ( 10) 00:18:58.943 4.954 - 4.978: 97.7224% ( 5) 00:18:58.943 4.978 - 5.001: 97.7968% ( 10) 00:18:58.943 5.001 - 5.025: 97.8340% ( 5) 00:18:58.943 5.025 - 5.049: 97.9159% ( 11) 00:18:58.943 5.049 - 5.073: 97.9531% ( 5) 00:18:58.943 5.073 - 5.096: 98.0424% ( 12) 00:18:58.943 5.096 - 5.120: 98.0648% ( 3) 00:18:58.943 5.120 - 5.144: 98.1169% ( 7) 00:18:58.943 5.144 
- 5.167: 98.1690% ( 7) 00:18:58.943 5.167 - 5.191: 98.2062% ( 5) 00:18:58.943 5.191 - 5.215: 98.2583% ( 7) 00:18:58.943 5.215 - 5.239: 98.2657% ( 1) 00:18:58.944 5.239 - 5.262: 98.2955% ( 4) 00:18:58.944 5.262 - 5.286: 98.3104% ( 2) 00:18:58.944 5.310 - 5.333: 98.3327% ( 3) 00:18:58.944 5.333 - 5.357: 98.3550% ( 3) 00:18:58.944 5.357 - 5.381: 98.3699% ( 2) 00:18:58.944 5.428 - 5.452: 98.3774% ( 1) 00:18:58.944 5.452 - 5.476: 98.3848% ( 1) 00:18:58.944 5.476 - 5.499: 98.3997% ( 2) 00:18:58.944 5.499 - 5.523: 98.4146% ( 2) 00:18:58.944 5.594 - 5.618: 98.4220% ( 1) 00:18:58.944 5.618 - 5.641: 98.4369% ( 2) 00:18:58.944 5.641 - 5.665: 98.4667% ( 4) 00:18:58.944 5.713 - 5.736: 98.4816% ( 2) 00:18:58.944 5.736 - 5.760: 98.4965% ( 2) 00:18:58.944 5.760 - 5.784: 98.5114% ( 2) 00:18:58.944 5.784 - 5.807: 98.5188% ( 1) 00:18:58.944 5.879 - 5.902: 98.5262% ( 1) 00:18:58.944 6.021 - 6.044: 98.5337% ( 1) 00:18:58.944 6.068 - 6.116: 98.5486% ( 2) 00:18:58.944 6.258 - 6.305: 98.5560% ( 1) 00:18:58.944 6.353 - 6.400: 98.5635% ( 1) 00:18:58.944 6.542 - 6.590: 98.5783% ( 2) 00:18:58.944 6.732 - 6.779: 98.5858% ( 1) 00:18:58.944 6.921 - 6.969: 98.6007% ( 2) 00:18:58.944 7.064 - 7.111: 98.6156% ( 2) 00:18:58.944 7.206 - 7.253: 98.6304% ( 2) 00:18:58.944 7.443 - 7.490: 98.6379% ( 1) 00:18:58.944 7.585 - 7.633: 98.6453% ( 1) 00:18:58.944 7.822 - 7.870: 98.6528% ( 1) 00:18:58.944 7.964 - 8.012: 98.6602% ( 1) 00:18:58.944 8.107 - 8.154: 98.6677% ( 1) 00:18:58.944 8.154 - 8.201: 98.6751% ( 1) 00:18:58.944 8.486 - 8.533: 98.6825% ( 1) 00:18:58.944 8.581 - 8.628: 98.7049% ( 3) 00:18:58.944 8.676 - 8.723: 98.7123% ( 1) 00:18:58.944 8.723 - 8.770: 98.7272% ( 2) 00:18:58.944 8.770 - 8.818: 98.7346% ( 1) 00:18:58.944 8.865 - 8.913: 98.7421% ( 1) 00:18:58.944 8.913 - 8.960: 98.7495% ( 1) 00:18:58.944 8.960 - 9.007: 98.7570% ( 1) 00:18:58.944 9.007 - 9.055: 98.7644% ( 1) 00:18:58.944 9.339 - 9.387: 98.7719% ( 1) 00:18:58.944 9.387 - 9.434: 98.7942% ( 3) 00:18:58.944 9.434 - 9.481: 98.8091% ( 2) 
00:18:58.944 9.481 - 9.529: 98.8240% ( 2) 00:18:58.944 9.624 - 9.671: 98.8314% ( 1) 00:18:58.944 9.671 - 9.719: 98.8463% ( 2) 00:18:58.944 9.813 - 9.861: 98.8612% ( 2) 00:18:58.944 9.956 - 10.003: 98.8686% ( 1) 00:18:58.944 10.098 - 10.145: 98.8761% ( 1) 00:18:58.944 10.145 - 10.193: 98.8835% ( 1) 00:18:58.944 10.193 - 10.240: 98.9058% ( 3) 00:18:58.944 10.240 - 10.287: 98.9207% ( 2) 00:18:58.944 10.287 - 10.335: 98.9282% ( 1) 00:18:58.944 10.524 - 10.572: 98.9356% ( 1) 00:18:58.944 10.667 - 10.714: 98.9431% ( 1) 00:18:58.944 10.714 - 10.761: 98.9505% ( 1) 00:18:58.944 10.761 - 10.809: 98.9579% ( 1) 00:18:58.944 10.904 - 10.951: 98.9654% ( 1) 00:18:58.944 10.951 - 10.999: 98.9728% ( 1) 00:18:58.944 11.093 - 11.141: 98.9803% ( 1) 00:18:58.944 11.236 - 11.283: 98.9952% ( 2) 00:18:58.944 11.425 - 11.473: 99.0026% ( 1) 00:18:58.944 11.520 - 11.567: 99.0100% ( 1) 00:18:58.944 11.994 - 12.041: 99.0175% ( 1) 00:18:58.944 12.041 - 12.089: 99.0249% ( 1) 00:18:58.944 12.231 - 12.326: 99.0324% ( 1) 00:18:58.944 12.326 - 12.421: 99.0473% ( 2) 00:18:58.944 13.274 - 13.369: 99.0547% ( 1) 00:18:58.944 13.369 - 13.464: 99.0622% ( 1) 00:18:58.944 13.559 - 13.653: 99.0696% ( 1) 00:18:58.944 13.748 - 13.843: 99.0919% ( 3) 00:18:58.944 13.843 - 13.938: 99.1143% ( 3) 00:18:58.944 13.938 - 14.033: 99.1291% ( 2) 00:18:58.944 14.127 - 14.222: 99.1366% ( 1) 00:18:58.944 14.222 - 14.317: 99.1440% ( 1) 00:18:58.944 14.412 - 14.507: 99.1515% ( 1) 00:18:58.944 14.507 - 14.601: 99.1589% ( 1) 00:18:58.944 14.696 - 14.791: 99.1738% ( 2) 00:18:58.944 14.981 - 15.076: 99.1812% ( 1) 00:18:58.944 15.455 - 15.550: 99.1887% ( 1) 00:18:58.944 15.929 - 16.024: 99.1961% ( 1) 00:18:58.944 17.161 - 17.256: 99.2036% ( 1) 00:18:58.944 17.256 - 17.351: 99.2110% ( 1) 00:18:58.944 17.351 - 17.446: 99.2333% ( 3) 00:18:58.944 17.446 - 17.541: 99.2631% ( 4) 00:18:58.944 17.541 - 17.636: 99.3078% ( 6) 00:18:58.944 17.636 - 17.730: 99.3450% ( 5) 00:18:58.944 17.730 - 17.825: 99.3599% ( 2) 00:18:58.944 17.825 - 
17.920: 99.3971% ( 5) 00:18:58.944 17.920 - 18.015: 99.4492% ( 7) 00:18:58.944 18.015 - 18.110: 99.5013% ( 7) 00:18:58.944 18.110 - 18.204: 99.5683% ( 9) 00:18:58.944 18.204 - 18.299: 99.6278% ( 8) 00:18:58.944 18.299 - 18.394: 99.6502% ( 3) 00:18:58.944 18.394 - 18.489: 99.7097% ( 8) 00:18:58.944 18.489 - 18.584: 99.7395% ( 4) 00:18:58.944 18.584 - 18.679: 99.7767% ( 5) 00:18:58.944 18.679 - 18.773: 99.8214% ( 6) 00:18:58.944 18.773 - 18.868: 99.8511% ( 4) 00:18:58.944 18.868 - 18.963: 99.8586% ( 1) 00:18:58.944 19.627 - 19.721: 99.8660% ( 1) 00:18:58.944 19.816 - 19.911: 99.8735% ( 1) 00:18:58.944 19.911 - 20.006: 99.8809% ( 1) 00:18:58.944 21.333 - 21.428: 99.8884% ( 1) 00:18:58.944 22.850 - 22.945: 99.8958% ( 1) 00:18:58.944 23.609 - 23.704: 99.9032% ( 1) 00:18:58.944 24.462 - 24.652: 99.9107% ( 1) 00:18:58.944 3980.705 - 4004.978: 99.9702% ( 8) 00:18:58.944 4004.978 - 4029.250: 100.0000% ( 4) 00:18:58.944 00:18:58.944 Complete histogram 00:18:58.944 ================== 00:18:58.944 Range in us Cumulative Count 00:18:58.944 2.074 - 2.086: 1.0569% ( 142) 00:18:58.944 2.086 - 2.098: 28.5300% ( 3691) 00:18:58.944 2.098 - 2.110: 46.3565% ( 2395) 00:18:58.944 2.110 - 2.121: 48.9542% ( 349) 00:18:58.944 2.121 - 2.133: 59.0919% ( 1362) 00:18:58.944 2.133 - 2.145: 62.7391% ( 490) 00:18:58.944 2.145 - 2.157: 65.7983% ( 411) 00:18:58.944 2.157 - 2.169: 77.3576% ( 1553) 00:18:58.944 2.169 - 2.181: 80.7443% ( 455) 00:18:58.944 2.181 - 2.193: 82.6349% ( 254) 00:18:58.944 2.193 - 2.204: 87.8824% ( 705) 00:18:58.944 2.204 - 2.216: 89.1701% ( 173) 00:18:58.944 2.216 - 2.228: 89.6762% ( 68) 00:18:58.944 2.228 - 2.240: 90.9341% ( 169) 00:18:58.944 2.240 - 2.252: 92.8247% ( 254) 00:18:58.944 2.252 - 2.264: 94.2389% ( 190) 00:18:58.944 2.264 - 2.276: 94.6111% ( 50) 00:18:58.944 2.276 - 2.287: 94.7674% ( 21) 00:18:58.944 2.287 - 2.299: 94.8418% ( 10) 00:18:58.944 2.299 - 2.311: 94.9163% ( 10) 00:18:58.944 2.311 - 2.323: 95.2289% ( 42) 00:18:58.944 2.323 - 2.335: 95.4968% ( 36) 
00:18:58.944 2.335 - 2.347: 95.5489% ( 7) 00:18:58.944 2.347 - 2.359: 95.6085% ( 8) 00:18:58.944 2.359 - 2.370: 95.7127% ( 14) 00:18:58.944 2.370 - 2.382: 95.8913% ( 24) 00:18:58.944 2.382 - 2.394: 96.1891% ( 40) 00:18:58.944 2.394 - 2.406: 96.6282% ( 59) 00:18:58.944 2.406 - 2.418: 96.9185% ( 39) 00:18:58.944 2.418 - 2.430: 97.2088% ( 39) 00:18:58.944 2.430 - 2.441: 97.4842% ( 37) 00:18:58.944 2.441 - 2.453: 97.6107% ( 17) 00:18:58.944 2.453 - 2.465: 97.7000% ( 12) 00:18:58.944 2.465 - 2.477: 97.7819% ( 11) 00:18:58.944 2.477 - 2.489: 97.8415% ( 8) 00:18:58.944 2.489 - 2.501: 97.9010% ( 8) 00:18:58.944 2.501 - 2.513: 97.9606% ( 8) 00:18:58.944 2.513 - 2.524: 98.0052% ( 6) 00:18:58.944 2.524 - 2.536: 98.0275% ( 3) 00:18:58.944 2.536 - 2.548: 98.0350% ( 1) 00:18:58.944 2.548 - 2.560: 98.0499% ( 2) 00:18:58.944 2.560 - 2.572: 98.0796% ( 4) 00:18:58.944 2.572 - 2.584: 98.1020% ( 3) 00:18:58.944 2.584 - 2.596: 98.1169% ( 2) 00:18:58.944 2.596 - 2.607: 98.1317% ( 2) 00:18:58.944 2.631 - 2.643: 98.1466% ( 2) 00:18:58.944 2.643 - 2.655: 98.1764% ( 4) 00:18:58.944 2.655 - 2.667: 98.1913% ( 2) 00:18:58.944 2.667 - 2.679: 98.2211% ( 4) 00:18:58.944 2.679 - 2.690: 98.2285% ( 1) 00:18:58.944 2.726 - 2.738: 98.2360% ( 1) 00:18:58.944 2.738 - 2.750: 98.2434% ( 1) 00:18:58.944 2.773 - 2.785: 98.2583% ( 2) 00:18:58.944 2.785 - 2.797: 98.2657% ( 1) 00:18:58.944 2.797 - 2.809: 98.2881% ( 3) 00:18:58.944 2.809 - 2.821: 98.3029% ( 2) 00:18:58.944 2.821 - 2.833: 98.3104% ( 1) 00:18:58.944 2.844 - 2.856: 98.3253% ( 2) 00:18:58.944 2.868 - 2.880: 98.3476% ( 3) 00:18:58.944 2.880 - 2.892: 98.3550% ( 1) 00:18:58.944 2.892 - 2.904: 98.3699% ( 2) 00:18:58.944 2.927 - 2.939: 98.3774% ( 1) 00:18:58.944 2.975 - 2.987: 98.3848% ( 1) 00:18:58.944 2.987 - 2.999: 98.4071% ( 3) 00:18:58.944 2.999 - 3.010: 98.4146% ( 1) 00:18:58.944 3.034 - 3.058: 98.4369% ( 3) 00:18:58.944 3.058 - 3.081: 98.4667% ( 4) 00:18:58.944 3.081 - 3.105: 98.4741% ( 1) 00:18:58.944 3.105 - 3.129: 98.4816% ( 1) 00:18:58.944 
3.129 - 3.153: 98.4890% ( 1) 00:18:58.945 3.153 - 3.176: 98.5039% ( 2) 00:18:58.945 3.200 - 3.224: 9[2024-07-23 18:07:06.529232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:58.945 8.5114% ( 1) 00:18:58.945 3.224 - 3.247: 98.5188% ( 1) 00:18:58.945 3.319 - 3.342: 98.5262% ( 1) 00:18:58.945 3.342 - 3.366: 98.5337% ( 1) 00:18:58.945 3.366 - 3.390: 98.5411% ( 1) 00:18:58.945 3.390 - 3.413: 98.5486% ( 1) 00:18:58.945 3.413 - 3.437: 98.5560% ( 1) 00:18:58.945 3.437 - 3.461: 98.5635% ( 1) 00:18:58.945 3.461 - 3.484: 98.5783% ( 2) 00:18:58.945 3.484 - 3.508: 98.5932% ( 2) 00:18:58.945 3.508 - 3.532: 98.6081% ( 2) 00:18:58.945 3.532 - 3.556: 98.6156% ( 1) 00:18:58.945 3.556 - 3.579: 98.6379% ( 3) 00:18:58.945 3.579 - 3.603: 98.6453% ( 1) 00:18:58.945 3.603 - 3.627: 98.6528% ( 1) 00:18:58.945 3.627 - 3.650: 98.6677% ( 2) 00:18:58.945 3.650 - 3.674: 98.6751% ( 1) 00:18:58.945 3.674 - 3.698: 98.6825% ( 1) 00:18:58.945 3.745 - 3.769: 98.6900% ( 1) 00:18:58.945 3.793 - 3.816: 98.7049% ( 2) 00:18:58.945 3.816 - 3.840: 98.7272% ( 3) 00:18:58.945 3.840 - 3.864: 98.7346% ( 1) 00:18:58.945 3.864 - 3.887: 98.7421% ( 1) 00:18:58.945 3.887 - 3.911: 98.7570% ( 2) 00:18:58.945 3.982 - 4.006: 98.7719% ( 2) 00:18:58.945 4.006 - 4.030: 98.7793% ( 1) 00:18:58.945 4.053 - 4.077: 98.7868% ( 1) 00:18:58.945 4.172 - 4.196: 98.7942% ( 1) 00:18:58.945 4.196 - 4.219: 98.8016% ( 1) 00:18:58.945 4.433 - 4.456: 98.8091% ( 1) 00:18:58.945 4.741 - 4.764: 98.8165% ( 1) 00:18:58.945 5.404 - 5.428: 98.8240% ( 1) 00:18:58.945 5.855 - 5.879: 98.8314% ( 1) 00:18:58.945 6.258 - 6.305: 98.8389% ( 1) 00:18:58.945 6.495 - 6.542: 98.8463% ( 1) 00:18:58.945 6.590 - 6.637: 98.8537% ( 1) 00:18:58.945 6.969 - 7.016: 98.8612% ( 1) 00:18:58.945 7.016 - 7.064: 98.8686% ( 1) 00:18:58.945 7.064 - 7.111: 98.8761% ( 1) 00:18:58.945 7.159 - 7.206: 98.8910% ( 2) 00:18:58.945 7.301 - 7.348: 98.8984% ( 1) 00:18:58.945 7.396 - 7.443: 98.9058% ( 1) 00:18:58.945 7.538 
- 7.585: 98.9133% ( 1) 00:18:58.945 7.870 - 7.917: 98.9207% ( 1) 00:18:58.945 8.154 - 8.201: 98.9356% ( 2) 00:18:58.945 8.628 - 8.676: 98.9431% ( 1) 00:18:58.945 9.434 - 9.481: 98.9505% ( 1) 00:18:58.945 15.455 - 15.550: 98.9579% ( 1) 00:18:58.945 15.550 - 15.644: 98.9654% ( 1) 00:18:58.945 15.834 - 15.929: 99.0100% ( 6) 00:18:58.945 15.929 - 16.024: 99.0324% ( 3) 00:18:58.945 16.024 - 16.119: 99.0547% ( 3) 00:18:58.945 16.119 - 16.213: 99.0845% ( 4) 00:18:58.945 16.213 - 16.308: 99.1143% ( 4) 00:18:58.945 16.308 - 16.403: 99.1291% ( 2) 00:18:58.945 16.403 - 16.498: 99.1440% ( 2) 00:18:58.945 16.498 - 16.593: 99.2185% ( 10) 00:18:58.945 16.593 - 16.687: 99.2482% ( 4) 00:18:58.945 16.687 - 16.782: 99.2706% ( 3) 00:18:58.945 16.782 - 16.877: 99.2854% ( 2) 00:18:58.945 16.877 - 16.972: 99.3078% ( 3) 00:18:58.945 16.972 - 17.067: 99.3227% ( 2) 00:18:58.945 17.067 - 17.161: 99.3376% ( 2) 00:18:58.945 17.161 - 17.256: 99.3524% ( 2) 00:18:58.945 17.256 - 17.351: 99.3673% ( 2) 00:18:58.945 17.446 - 17.541: 99.3748% ( 1) 00:18:58.945 17.541 - 17.636: 99.3897% ( 2) 00:18:58.945 17.920 - 18.015: 99.3971% ( 1) 00:18:58.945 18.679 - 18.773: 99.4045% ( 1) 00:18:58.945 18.773 - 18.868: 99.4120% ( 1) 00:18:58.945 19.247 - 19.342: 99.4194% ( 1) 00:18:58.945 49.683 - 50.062: 99.4269% ( 1) 00:18:58.945 3980.705 - 4004.978: 99.9181% ( 66) 00:18:58.945 4004.978 - 4029.250: 100.0000% ( 11) 00:18:58.945 00:18:58.945 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:58.945 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:58.945 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:58.945 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local 
malloc_num=Malloc4 00:18:58.945 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:59.202 [ 00:18:59.202 { 00:18:59.202 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:59.202 "subtype": "Discovery", 00:18:59.202 "listen_addresses": [], 00:18:59.202 "allow_any_host": true, 00:18:59.202 "hosts": [] 00:18:59.202 }, 00:18:59.202 { 00:18:59.202 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:59.203 "subtype": "NVMe", 00:18:59.203 "listen_addresses": [ 00:18:59.203 { 00:18:59.203 "trtype": "VFIOUSER", 00:18:59.203 "adrfam": "IPv4", 00:18:59.203 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:59.203 "trsvcid": "0" 00:18:59.203 } 00:18:59.203 ], 00:18:59.203 "allow_any_host": true, 00:18:59.203 "hosts": [], 00:18:59.203 "serial_number": "SPDK1", 00:18:59.203 "model_number": "SPDK bdev Controller", 00:18:59.203 "max_namespaces": 32, 00:18:59.203 "min_cntlid": 1, 00:18:59.203 "max_cntlid": 65519, 00:18:59.203 "namespaces": [ 00:18:59.203 { 00:18:59.203 "nsid": 1, 00:18:59.203 "bdev_name": "Malloc1", 00:18:59.203 "name": "Malloc1", 00:18:59.203 "nguid": "17C6F7DFD1E04290919C62AC064F72B1", 00:18:59.203 "uuid": "17c6f7df-d1e0-4290-919c-62ac064f72b1" 00:18:59.203 }, 00:18:59.203 { 00:18:59.203 "nsid": 2, 00:18:59.203 "bdev_name": "Malloc3", 00:18:59.203 "name": "Malloc3", 00:18:59.203 "nguid": "B062EA41A7874F129763FB0B3D610ADA", 00:18:59.203 "uuid": "b062ea41-a787-4f12-9763-fb0b3d610ada" 00:18:59.203 } 00:18:59.203 ] 00:18:59.203 }, 00:18:59.203 { 00:18:59.203 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:59.203 "subtype": "NVMe", 00:18:59.203 "listen_addresses": [ 00:18:59.203 { 00:18:59.203 "trtype": "VFIOUSER", 00:18:59.203 "adrfam": "IPv4", 00:18:59.203 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:59.203 "trsvcid": "0" 00:18:59.203 } 00:18:59.203 ], 00:18:59.203 "allow_any_host": true, 00:18:59.203 "hosts": [], 00:18:59.203 
"serial_number": "SPDK2", 00:18:59.203 "model_number": "SPDK bdev Controller", 00:18:59.203 "max_namespaces": 32, 00:18:59.203 "min_cntlid": 1, 00:18:59.203 "max_cntlid": 65519, 00:18:59.203 "namespaces": [ 00:18:59.203 { 00:18:59.203 "nsid": 1, 00:18:59.203 "bdev_name": "Malloc2", 00:18:59.203 "name": "Malloc2", 00:18:59.203 "nguid": "6643FE2615084F2F84AF164FBC8268F1", 00:18:59.203 "uuid": "6643fe26-1508-4f2f-84af-164fbc8268f1" 00:18:59.203 } 00:18:59.203 ] 00:18:59.203 } 00:18:59.203 ] 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2350029 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:59.203 18:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:59.461 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.461 [2024-07-23 18:07:06.979796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:59.461 Malloc4 00:18:59.719 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:59.719 [2024-07-23 18:07:07.349851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:59.719 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:59.976 Asynchronous Event Request test 00:18:59.976 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:59.976 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:59.976 Registering asynchronous event callbacks... 00:18:59.976 Starting namespace attribute notice tests for all controllers... 00:18:59.976 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:59.976 aer_cb - Changed Namespace 00:18:59.976 Cleaning up... 
00:18:59.976 [ 00:18:59.976 { 00:18:59.976 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:59.976 "subtype": "Discovery", 00:18:59.976 "listen_addresses": [], 00:18:59.976 "allow_any_host": true, 00:18:59.976 "hosts": [] 00:18:59.976 }, 00:18:59.976 { 00:18:59.976 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:59.976 "subtype": "NVMe", 00:18:59.976 "listen_addresses": [ 00:18:59.976 { 00:18:59.976 "trtype": "VFIOUSER", 00:18:59.976 "adrfam": "IPv4", 00:18:59.976 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:59.976 "trsvcid": "0" 00:18:59.976 } 00:18:59.976 ], 00:18:59.976 "allow_any_host": true, 00:18:59.976 "hosts": [], 00:18:59.976 "serial_number": "SPDK1", 00:18:59.976 "model_number": "SPDK bdev Controller", 00:18:59.976 "max_namespaces": 32, 00:18:59.976 "min_cntlid": 1, 00:18:59.976 "max_cntlid": 65519, 00:18:59.976 "namespaces": [ 00:18:59.976 { 00:18:59.976 "nsid": 1, 00:18:59.976 "bdev_name": "Malloc1", 00:18:59.976 "name": "Malloc1", 00:18:59.976 "nguid": "17C6F7DFD1E04290919C62AC064F72B1", 00:18:59.976 "uuid": "17c6f7df-d1e0-4290-919c-62ac064f72b1" 00:18:59.976 }, 00:18:59.976 { 00:18:59.976 "nsid": 2, 00:18:59.976 "bdev_name": "Malloc3", 00:18:59.976 "name": "Malloc3", 00:18:59.976 "nguid": "B062EA41A7874F129763FB0B3D610ADA", 00:18:59.976 "uuid": "b062ea41-a787-4f12-9763-fb0b3d610ada" 00:18:59.976 } 00:18:59.976 ] 00:18:59.976 }, 00:18:59.976 { 00:18:59.976 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:59.976 "subtype": "NVMe", 00:18:59.976 "listen_addresses": [ 00:18:59.976 { 00:18:59.976 "trtype": "VFIOUSER", 00:18:59.976 "adrfam": "IPv4", 00:18:59.976 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:59.976 "trsvcid": "0" 00:18:59.976 } 00:18:59.976 ], 00:18:59.976 "allow_any_host": true, 00:18:59.976 "hosts": [], 00:18:59.976 "serial_number": "SPDK2", 00:18:59.976 "model_number": "SPDK bdev Controller", 00:18:59.976 "max_namespaces": 32, 00:18:59.976 "min_cntlid": 1, 00:18:59.976 "max_cntlid": 65519, 00:18:59.976 "namespaces": [ 
00:18:59.976 { 00:18:59.976 "nsid": 1, 00:18:59.976 "bdev_name": "Malloc2", 00:18:59.976 "name": "Malloc2", 00:18:59.976 "nguid": "6643FE2615084F2F84AF164FBC8268F1", 00:18:59.976 "uuid": "6643fe26-1508-4f2f-84af-164fbc8268f1" 00:18:59.976 }, 00:18:59.976 { 00:18:59.976 "nsid": 2, 00:18:59.976 "bdev_name": "Malloc4", 00:18:59.976 "name": "Malloc4", 00:18:59.976 "nguid": "F91F246BC75D498BB1B283C042CB5F50", 00:18:59.976 "uuid": "f91f246b-c75d-498b-b1b2-83c042cb5f50" 00:18:59.976 } 00:18:59.976 ] 00:18:59.976 } 00:18:59.976 ] 00:18:59.976 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2350029 00:18:59.976 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:59.976 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2344440 00:18:59.976 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2344440 ']' 00:18:59.976 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2344440 00:18:59.976 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:59.976 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.976 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344440 00:19:00.233 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:00.233 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:00.233 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344440' 00:19:00.233 killing process with pid 2344440 00:19:00.233 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@967 -- # kill 2344440 00:19:00.233 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2344440 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2350170 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2350170' 00:19:00.491 Process pid: 2350170 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2350170 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2350170 ']' 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.491 
18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.491 18:07:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:00.491 [2024-07-23 18:07:07.990000] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:00.491 [2024-07-23 18:07:07.991023] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:19:00.491 [2024-07-23 18:07:07.991077] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.491 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.491 [2024-07-23 18:07:08.053513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.491 [2024-07-23 18:07:08.143048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.491 [2024-07-23 18:07:08.143118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.491 [2024-07-23 18:07:08.143146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.491 [2024-07-23 18:07:08.143158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.491 [2024-07-23 18:07:08.143168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:00.491 [2024-07-23 18:07:08.143222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.491 [2024-07-23 18:07:08.143294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.491 [2024-07-23 18:07:08.143359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.491 [2024-07-23 18:07:08.143363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.750 [2024-07-23 18:07:08.233429] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:00.750 [2024-07-23 18:07:08.233626] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:00.750 [2024-07-23 18:07:08.233923] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:19:00.750 [2024-07-23 18:07:08.234562] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:00.750 [2024-07-23 18:07:08.234843] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:19:00.750 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.750 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:19:00.750 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:01.681 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:01.938 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:01.939 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:01.939 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:01.939 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:01.939 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:02.196 Malloc1 00:19:02.196 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:02.454 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:02.712 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:02.969 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:02.969 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:02.969 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:03.228 Malloc2 00:19:03.228 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:03.485 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:03.743 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2350170 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2350170 ']' 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2350170 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:04.000 18:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2350170 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2350170' 00:19:04.000 killing process with pid 2350170 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2350170 00:19:04.000 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2350170 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:04.260 00:19:04.260 real 0m52.862s 00:19:04.260 user 3m29.013s 00:19:04.260 sys 0m4.245s 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:04.260 ************************************ 00:19:04.260 END TEST nvmf_vfio_user 00:19:04.260 ************************************ 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:04.260 ************************************ 00:19:04.260 START TEST nvmf_vfio_user_nvme_compliance 00:19:04.260 ************************************ 00:19:04.260 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:04.519 * Looking for test storage... 00:19:04.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.519 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.520 18:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.520 18:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2350763 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2350763' 00:19:04.520 Process pid: 2350763 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2350763 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2350763 ']' 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.520 18:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.520 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.520 [2024-07-23 18:07:12.031974] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:19:04.520 [2024-07-23 18:07:12.032072] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.520 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.520 [2024-07-23 18:07:12.095014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:04.778 [2024-07-23 18:07:12.181046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.778 [2024-07-23 18:07:12.181115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.778 [2024-07-23 18:07:12.181146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.778 [2024-07-23 18:07:12.181157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.778 [2024-07-23 18:07:12.181167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.778 [2024-07-23 18:07:12.184338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.778 [2024-07-23 18:07:12.184369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.778 [2024-07-23 18:07:12.184375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.778 18:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.778 18:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:19:04.778 18:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.712 18:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:05.712 malloc0 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.712 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:05.969 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.969 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:05.969 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.969 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:05.969 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:19:05.969 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:05.969 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.969 00:19:05.969 00:19:05.969 CUnit - A unit testing framework for C - Version 2.1-3 00:19:05.969 http://cunit.sourceforge.net/ 00:19:05.969 00:19:05.969 00:19:05.969 Suite: nvme_compliance 00:19:05.969 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-23 18:07:13.541891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.969 [2024-07-23 18:07:13.543411] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:05.969 [2024-07-23 18:07:13.543438] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:05.969 [2024-07-23 18:07:13.543452] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:05.969 [2024-07-23 18:07:13.544921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.969 passed 00:19:06.226 Test: admin_identify_ctrlr_verify_fused ...[2024-07-23 18:07:13.630490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.226 [2024-07-23 18:07:13.633511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.226 passed 00:19:06.226 Test: admin_identify_ns ...[2024-07-23 18:07:13.718909] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.226 [2024-07-23 18:07:13.779339] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:06.226 [2024-07-23 18:07:13.787352] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:06.226 [2024-07-23 
18:07:13.808461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.226 passed 00:19:06.484 Test: admin_get_features_mandatory_features ...[2024-07-23 18:07:13.892124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.484 [2024-07-23 18:07:13.895146] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.484 passed 00:19:06.484 Test: admin_get_features_optional_features ...[2024-07-23 18:07:13.979723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.484 [2024-07-23 18:07:13.982744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.484 passed 00:19:06.484 Test: admin_set_features_number_of_queues ...[2024-07-23 18:07:14.065945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.742 [2024-07-23 18:07:14.171421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.742 passed 00:19:06.742 Test: admin_get_log_page_mandatory_logs ...[2024-07-23 18:07:14.252100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.742 [2024-07-23 18:07:14.257133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.742 passed 00:19:06.742 Test: admin_get_log_page_with_lpo ...[2024-07-23 18:07:14.340093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.000 [2024-07-23 18:07:14.408338] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:07.000 [2024-07-23 18:07:14.421414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.000 passed 00:19:07.000 Test: fabric_property_get ...[2024-07-23 18:07:14.505070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.000 [2024-07-23 18:07:14.506374] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:07.000 [2024-07-23 18:07:14.508095] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.000 passed 00:19:07.000 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-23 18:07:14.592684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.000 [2024-07-23 18:07:14.593994] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:07.000 [2024-07-23 18:07:14.595709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.000 passed 00:19:07.258 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-23 18:07:14.680770] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.258 [2024-07-23 18:07:14.764327] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:07.258 [2024-07-23 18:07:14.780342] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:07.258 [2024-07-23 18:07:14.785460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.258 passed 00:19:07.258 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-23 18:07:14.869230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.258 [2024-07-23 18:07:14.870538] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:07.258 [2024-07-23 18:07:14.872253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.258 passed 00:19:07.515 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-23 18:07:14.958230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.515 [2024-07-23 18:07:15.033330] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:19:07.515 [2024-07-23 18:07:15.057330] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:07.516 [2024-07-23 18:07:15.062436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.516 passed 00:19:07.516 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-23 18:07:15.147077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.516 [2024-07-23 18:07:15.148414] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:07.516 [2024-07-23 18:07:15.148458] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:07.516 [2024-07-23 18:07:15.150099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.774 passed 00:19:07.774 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-23 18:07:15.235664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.774 [2024-07-23 18:07:15.326328] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:07.774 [2024-07-23 18:07:15.334325] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:07.774 [2024-07-23 18:07:15.342343] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:07.774 [2024-07-23 18:07:15.350329] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:07.774 [2024-07-23 18:07:15.379429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.774 passed 00:19:08.032 Test: admin_create_io_sq_verify_pc ...[2024-07-23 18:07:15.465593] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:08.032 [2024-07-23 18:07:15.480341] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:08.032 
[2024-07-23 18:07:15.497999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:08.032 passed 00:19:08.032 Test: admin_create_io_qp_max_qps ...[2024-07-23 18:07:15.585610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.405 [2024-07-23 18:07:16.675361] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:09.405 [2024-07-23 18:07:17.056769] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.662 passed 00:19:09.662 Test: admin_create_io_sq_shared_cq ...[2024-07-23 18:07:17.143227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.662 [2024-07-23 18:07:17.272326] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:09.663 [2024-07-23 18:07:17.309436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.920 passed 00:19:09.921 00:19:09.921 Run Summary: Type Total Ran Passed Failed Inactive 00:19:09.921 suites 1 1 n/a 0 0 00:19:09.921 tests 18 18 18 0 0 00:19:09.921 asserts 360 360 360 0 n/a 00:19:09.921 00:19:09.921 Elapsed time = 1.563 seconds 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2350763 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2350763 ']' 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2350763 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.921 18:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2350763 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2350763' 00:19:09.921 killing process with pid 2350763 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2350763 00:19:09.921 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2350763 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:10.179 00:19:10.179 real 0m5.719s 00:19:10.179 user 0m16.094s 00:19:10.179 sys 0m0.581s 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:10.179 ************************************ 00:19:10.179 END TEST nvmf_vfio_user_nvme_compliance 00:19:10.179 ************************************ 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:10.179 
18:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:10.179 ************************************ 00:19:10.179 START TEST nvmf_vfio_user_fuzz 00:19:10.179 ************************************ 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:10.179 * Looking for test storage... 00:19:10.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.179 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.180 18:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:10.180 18:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2351479 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2351479' 00:19:10.180 Process pid: 2351479 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2351479 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2351479 ']' 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.180 18:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.180 18:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:10.444 18:07:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.444 18:07:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:19:10.444 18:07:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:11.404 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:11.404 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.404 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:11.404 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.404 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:11.404 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:11.404 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.404 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:11.663 malloc0 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:11.663 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:43.723 Fuzzing completed. Shutting down the fuzz application 00:19:43.723 00:19:43.723 Dumping successful admin opcodes: 00:19:43.723 8, 9, 10, 24, 00:19:43.723 Dumping successful io opcodes: 00:19:43.723 0, 00:19:43.723 NS: 0x200003a1ef00 I/O qp, Total commands completed: 656104, total successful commands: 2554, random_seed: 1501891648 00:19:43.723 NS: 0x200003a1ef00 admin qp, Total commands completed: 97846, total successful commands: 797, random_seed: 1134444800 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2351479 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2351479 ']' 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2351479 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2351479 00:19:43.723 18:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2351479' 00:19:43.723 killing process with pid 2351479 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2351479 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2351479 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:43.723 00:19:43.723 real 0m32.165s 00:19:43.723 user 0m33.179s 00:19:43.723 sys 0m25.463s 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:43.723 ************************************ 00:19:43.723 END TEST nvmf_vfio_user_fuzz 00:19:43.723 ************************************ 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:43.723 
18:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.723 ************************************ 00:19:43.723 START TEST nvmf_auth_target 00:19:43.723 ************************************ 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:43.723 * Looking for test storage... 00:19:43.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.723 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.724 18:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.724 18:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.724 18:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:44.657 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:44.658 18:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:44.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:44.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:44.658 18:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:44.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:44.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:44.658 18:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:44.658 18:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:44.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:44.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:19:44.658 00:19:44.658 --- 10.0.0.2 ping statistics --- 00:19:44.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.658 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:19:44.658 00:19:44.658 --- 10.0.0.1 ping statistics --- 00:19:44.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.658 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2356814 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2356814 00:19:44.658 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2356814 ']' 00:19:44.659 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.659 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.659 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:44.659 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.659 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.916 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.916 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:44.916 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.916 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.916 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2356954 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f659906d4716af483af7adc4c0ca8b76e9ccad41d0300c6e 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DCs 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f659906d4716af483af7adc4c0ca8b76e9ccad41d0300c6e 0 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f659906d4716af483af7adc4c0ca8b76e9ccad41d0300c6e 0 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f659906d4716af483af7adc4c0ca8b76e9ccad41d0300c6e 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DCs 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DCs 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DCs 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fa1a1dcbabe80427da469b6a964ea305db024551d7c3d91ab2c7e42a11bc001c 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tjp 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fa1a1dcbabe80427da469b6a964ea305db024551d7c3d91ab2c7e42a11bc001c 3 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fa1a1dcbabe80427da469b6a964ea305db024551d7c3d91ab2c7e42a11bc001c 3 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fa1a1dcbabe80427da469b6a964ea305db024551d7c3d91ab2c7e42a11bc001c 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:19:44.917 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tjp 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tjp 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.tjp 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0bd31307f8b4b5709f1fa00694ccc288 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.70y 00:19:45.175 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0bd31307f8b4b5709f1fa00694ccc288 1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
0bd31307f8b4b5709f1fa00694ccc288 1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0bd31307f8b4b5709f1fa00694ccc288 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.70y 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.70y 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.70y 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0256a82ba9a05ff23ba3967b7cc3492c253a5b189730ec39 00:19:45.176 18:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qM9 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0256a82ba9a05ff23ba3967b7cc3492c253a5b189730ec39 2 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0256a82ba9a05ff23ba3967b7cc3492c253a5b189730ec39 2 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0256a82ba9a05ff23ba3967b7cc3492c253a5b189730ec39 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qM9 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qM9 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.qM9 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=59dbfd6c5e0b14c90f7e1e3b78d8f60c334b30f7879b86b2 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HQf 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 59dbfd6c5e0b14c90f7e1e3b78d8f60c334b30f7879b86b2 2 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 59dbfd6c5e0b14c90f7e1e3b78d8f60c334b30f7879b86b2 2 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=59dbfd6c5e0b14c90f7e1e3b78d8f60c334b30f7879b86b2 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HQf 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HQf 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.HQf 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7a8d14079d1ec18592fedb39ae4c871d 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Nas 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7a8d14079d1ec18592fedb39ae4c871d 1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7a8d14079d1ec18592fedb39ae4c871d 1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7a8d14079d1ec18592fedb39ae4c871d 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Nas 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Nas 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Nas 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0fabe515a42629ea12acbd8e1c879ea0ec5c93895b25cbc4eaae3dbb87b3dcf4 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.A3h 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0fabe515a42629ea12acbd8e1c879ea0ec5c93895b25cbc4eaae3dbb87b3dcf4 3 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 0fabe515a42629ea12acbd8e1c879ea0ec5c93895b25cbc4eaae3dbb87b3dcf4 3 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0fabe515a42629ea12acbd8e1c879ea0ec5c93895b25cbc4eaae3dbb87b3dcf4 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:45.176 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.A3h 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.A3h 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.A3h 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2356814 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2356814 ']' 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
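The `gen_dhchap_key` calls traced above draw `len/2` random bytes with `xxd -p /dev/urandom`, then `format_dhchap_key` (nvmf/common.sh@729) hands the hex string and a digest index (null=0, sha256=1, sha384=2, sha512=3, per the `digests` map) to an inline `python -` step. A standalone sketch of that encoding, under the assumption that the DHHC-1 payload is base64 of the ASCII key followed by its little-endian CRC32 trailer (the key from this run is reused so the result can be compared against the secret that appears in the later `nvme connect` line):

```shell
#!/bin/bash
# Sketch of format_dhchap_key: wrap a hex key string as
#   DHHC-1:<digest>:<base64(key || crc32(key), little-endian)>:
# Assumption: the inline python step appends a 4-byte CRC32 before base64
# encoding. Key and digest are the keys[0] values from the log above.
key=f659906d4716af483af7adc4c0ca8b76e9ccad41d0300c6e
digest=0   # 'null' in the digests map
secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode("ascii")          # key is used as an ASCII string, not raw bytes
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte little-endian CRC32 trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
)
echo "$secret"
```

The base64 body of the result starts `ZjY1OTkwNmQ0…`, matching the `--dhchap-secret DHHC-1:00:…` value passed to `nvme connect` further down, which is consistent with the layout assumed here.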
00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.434 18:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.434 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.434 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:45.434 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2356954 /var/tmp/host.sock 00:19:45.434 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2356954 ']' 00:19:45.434 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:45.434 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.434 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:45.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:45.435 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.435 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.692 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.692 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:45.692 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:45.692 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.692 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.949 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.949 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:45.949 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DCs 00:19:45.949 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.949 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.949 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.949 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DCs 00:19:45.949 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DCs 00:19:46.205 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.tjp ]] 00:19:46.205 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tjp 00:19:46.205 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.205 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.205 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.205 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tjp 00:19:46.205 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tjp 00:19:46.462 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:46.462 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.70y 00:19:46.462 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.462 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.462 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.462 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.70y 00:19:46.462 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.70y 00:19:46.719 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.qM9 ]] 00:19:46.719 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qM9 00:19:46.719 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.719 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.719 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.719 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qM9 00:19:46.719 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qM9 00:19:46.976 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:46.976 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HQf 00:19:46.976 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.976 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.976 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.976 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.HQf 00:19:46.976 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.HQf 00:19:47.234 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.Nas ]] 00:19:47.234 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nas 00:19:47.234 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.234 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.234 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.234 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nas 00:19:47.234 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nas 00:19:47.491 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:47.491 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.A3h 00:19:47.491 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.491 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.491 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.491 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.A3h 00:19:47.492 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.A3h 00:19:47.492 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:19:47.492 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:47.492 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.492 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.492 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:47.492 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
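The target/auth.sh@81-86 loop traced above registers each generated key file twice: once with `rpc_cmd keyring_file_add_key keyN` against the target's default `/var/tmp/spdk.sock`, and once via `hostrpc` against `/var/tmp/host.sock` for the initiator side (and likewise `ckeyN` where a controller key was generated). A dry-run sketch of that loop using the file names from this run; it only prints the commands, since executing them needs a live SPDK target and host process:

```shell
#!/bin/bash
# Dry-run of the key registration loop (target/auth.sh@81): every key file is
# added to the target keyring and, via the host RPC socket, to the host
# keyring too. RPC path and key files are taken from the log; nothing runs.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.DCs /tmp/spdk.key-sha256.70y
      /tmp/spdk.key-sha384.HQf /tmp/spdk.key-sha512.A3h)
for i in "${!keys[@]}"; do
    echo "$RPC keyring_file_add_key key$i ${keys[$i]}"                       # target side
    echo "$RPC -s /var/tmp/host.sock keyring_file_add_key key$i ${keys[$i]}" # host side
done
```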
00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.750 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.315 00:19:48.315 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.315 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.315 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.315 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.572 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.572 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.572 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.572 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.572 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:48.572 { 00:19:48.572 "cntlid": 1, 00:19:48.572 "qid": 0, 00:19:48.572 "state": "enabled", 00:19:48.572 "thread": "nvmf_tgt_poll_group_000", 00:19:48.572 "listen_address": { 00:19:48.572 "trtype": "TCP", 00:19:48.572 "adrfam": "IPv4", 00:19:48.572 "traddr": "10.0.0.2", 00:19:48.572 "trsvcid": "4420" 00:19:48.572 }, 00:19:48.572 "peer_address": { 00:19:48.572 "trtype": "TCP", 00:19:48.572 "adrfam": "IPv4", 00:19:48.572 "traddr": "10.0.0.1", 00:19:48.572 "trsvcid": "59760" 00:19:48.572 }, 00:19:48.572 "auth": { 00:19:48.572 "state": "completed", 00:19:48.572 "digest": "sha256", 00:19:48.573 "dhgroup": "null" 00:19:48.573 } 00:19:48.573 } 00:19:48.573 ]' 00:19:48.573 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.573 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.573 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.573 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:48.573 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.573 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.573 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.573 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.830 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:19:49.763 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.763 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.763 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.763 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.763 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.763 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.763 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.763 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.021 18:07:57 
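After each `bdev_nvme_attach_controller`, `connect_authenticate` (target/auth.sh@44-48) pulls the qpair list and asserts the negotiated auth parameters with jq. Re-running the same probes against a trimmed copy of the qpairs JSON captured above (jq is assumed available, as it is in the traced run; the JSON is cut down to the fields the test actually checks):

```shell
#!/bin/bash
# The auth block of the qpair JSON from the log, plus the same jq probes the
# test applies: digest, dhgroup, and completed auth state.
qpairs='[ { "cntlid": 1, "qid": 0, "state": "enabled",
            "auth": { "state": "completed", "digest": "sha256", "dhgroup": "null" } } ]'
digest=$(echo "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
auth_state=$(echo "$qpairs" | jq -r '.[0].auth.state')
[ "$digest" = sha256 ] && [ "$dhgroup" = null ] && [ "$auth_state" = completed ] \
    && echo "auth OK"
```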
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.021 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.279 00:19:50.279 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.279 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.279 18:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.537 { 00:19:50.537 "cntlid": 3, 00:19:50.537 "qid": 0, 00:19:50.537 "state": "enabled", 00:19:50.537 "thread": "nvmf_tgt_poll_group_000", 00:19:50.537 "listen_address": { 00:19:50.537 "trtype": "TCP", 00:19:50.537 "adrfam": "IPv4", 00:19:50.537 "traddr": "10.0.0.2", 00:19:50.537 "trsvcid": "4420" 00:19:50.537 }, 00:19:50.537 "peer_address": { 00:19:50.537 "trtype": "TCP", 00:19:50.537 "adrfam": "IPv4", 00:19:50.537 "traddr": "10.0.0.1", 00:19:50.537 "trsvcid": "59784" 00:19:50.537 }, 00:19:50.537 "auth": { 00:19:50.537 "state": "completed", 00:19:50.537 "digest": "sha256", 00:19:50.537 "dhgroup": "null" 00:19:50.537 } 00:19:50.537 } 00:19:50.537 ]' 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:50.537 18:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.537 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.795 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:19:51.725 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.725 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.725 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.725 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.725 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.725 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.725 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.725 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.983 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.983 
18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.240 00:19:52.240 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.240 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.240 18:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.498 { 00:19:52.498 "cntlid": 5, 00:19:52.498 "qid": 0, 00:19:52.498 "state": "enabled", 00:19:52.498 "thread": "nvmf_tgt_poll_group_000", 00:19:52.498 "listen_address": { 00:19:52.498 "trtype": "TCP", 00:19:52.498 "adrfam": "IPv4", 00:19:52.498 "traddr": "10.0.0.2", 00:19:52.498 "trsvcid": "4420" 00:19:52.498 }, 00:19:52.498 "peer_address": { 00:19:52.498 "trtype": "TCP", 00:19:52.498 "adrfam": "IPv4", 00:19:52.498 "traddr": 
"10.0.0.1", 00:19:52.498 "trsvcid": "59808" 00:19:52.498 }, 00:19:52.498 "auth": { 00:19:52.498 "state": "completed", 00:19:52.498 "digest": "sha256", 00:19:52.498 "dhgroup": "null" 00:19:52.498 } 00:19:52.498 } 00:19:52.498 ]' 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:52.498 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.755 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.755 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.755 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.012 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.979 18:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.979 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.236 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.236 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.236 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.493 00:19:54.493 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.493 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.493 18:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.748 18:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.748 { 00:19:54.748 "cntlid": 7, 00:19:54.748 "qid": 0, 00:19:54.748 "state": "enabled", 00:19:54.748 "thread": "nvmf_tgt_poll_group_000", 00:19:54.748 "listen_address": { 00:19:54.748 "trtype": "TCP", 00:19:54.748 "adrfam": "IPv4", 00:19:54.748 "traddr": "10.0.0.2", 00:19:54.748 "trsvcid": "4420" 00:19:54.748 }, 00:19:54.748 "peer_address": { 00:19:54.748 "trtype": "TCP", 00:19:54.748 "adrfam": "IPv4", 00:19:54.748 "traddr": "10.0.0.1", 00:19:54.748 "trsvcid": "55214" 00:19:54.748 }, 00:19:54.748 "auth": { 00:19:54.748 "state": "completed", 00:19:54.748 "digest": "sha256", 00:19:54.748 "dhgroup": "null" 00:19:54.748 } 00:19:54.748 } 00:19:54.748 ]' 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.748 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.005 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.934 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.190 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.447 00:19:56.448 18:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.448 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.448 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.705 { 00:19:56.705 "cntlid": 9, 00:19:56.705 "qid": 0, 00:19:56.705 "state": "enabled", 00:19:56.705 "thread": "nvmf_tgt_poll_group_000", 00:19:56.705 "listen_address": { 00:19:56.705 "trtype": "TCP", 00:19:56.705 "adrfam": "IPv4", 00:19:56.705 "traddr": "10.0.0.2", 00:19:56.705 "trsvcid": "4420" 00:19:56.705 }, 00:19:56.705 "peer_address": { 00:19:56.705 "trtype": "TCP", 00:19:56.705 "adrfam": "IPv4", 00:19:56.705 "traddr": "10.0.0.1", 00:19:56.705 "trsvcid": "55234" 00:19:56.705 }, 00:19:56.705 "auth": { 00:19:56.705 "state": "completed", 00:19:56.705 "digest": "sha256", 00:19:56.705 "dhgroup": "ffdhe2048" 00:19:56.705 } 00:19:56.705 } 00:19:56.705 ]' 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.705 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.962 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.962 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.963 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.220 18:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.154 18:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.154 18:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.154 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.412 00:19:58.412 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.412 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.412 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.670 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.670 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.670 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.670 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.670 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.670 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.670 { 
00:19:58.670 "cntlid": 11, 00:19:58.670 "qid": 0, 00:19:58.670 "state": "enabled", 00:19:58.670 "thread": "nvmf_tgt_poll_group_000", 00:19:58.670 "listen_address": { 00:19:58.670 "trtype": "TCP", 00:19:58.670 "adrfam": "IPv4", 00:19:58.670 "traddr": "10.0.0.2", 00:19:58.670 "trsvcid": "4420" 00:19:58.670 }, 00:19:58.670 "peer_address": { 00:19:58.670 "trtype": "TCP", 00:19:58.670 "adrfam": "IPv4", 00:19:58.670 "traddr": "10.0.0.1", 00:19:58.670 "trsvcid": "55246" 00:19:58.670 }, 00:19:58.670 "auth": { 00:19:58.670 "state": "completed", 00:19:58.670 "digest": "sha256", 00:19:58.670 "dhgroup": "ffdhe2048" 00:19:58.670 } 00:19:58.670 } 00:19:58.670 ]' 00:19:58.670 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.929 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.929 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.929 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.929 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.929 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.929 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.929 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.187 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:20:00.121 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.121 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.121 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.121 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.121 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.121 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.121 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.121 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.379 18:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.637 00:20:00.637 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.637 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.637 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.893 { 00:20:00.893 "cntlid": 13, 00:20:00.893 "qid": 0, 00:20:00.893 "state": "enabled", 00:20:00.893 "thread": "nvmf_tgt_poll_group_000", 00:20:00.893 "listen_address": { 00:20:00.893 "trtype": "TCP", 00:20:00.893 "adrfam": "IPv4", 00:20:00.893 "traddr": "10.0.0.2", 00:20:00.893 "trsvcid": "4420" 00:20:00.893 }, 00:20:00.893 "peer_address": { 00:20:00.893 "trtype": "TCP", 00:20:00.893 "adrfam": "IPv4", 00:20:00.893 "traddr": "10.0.0.1", 00:20:00.893 "trsvcid": "55280" 00:20:00.893 }, 00:20:00.893 "auth": { 00:20:00.893 "state": "completed", 00:20:00.893 "digest": "sha256", 00:20:00.893 "dhgroup": "ffdhe2048" 00:20:00.893 } 00:20:00.893 } 00:20:00.893 ]' 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.893 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.151 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:20:02.084 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.084 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.084 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.084 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.084 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.084 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.084 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.084 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.342 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.907 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.908 { 00:20:02.908 "cntlid": 15, 00:20:02.908 "qid": 0, 00:20:02.908 "state": "enabled", 00:20:02.908 "thread": "nvmf_tgt_poll_group_000", 00:20:02.908 "listen_address": { 00:20:02.908 "trtype": "TCP", 00:20:02.908 "adrfam": "IPv4", 00:20:02.908 "traddr": "10.0.0.2", 00:20:02.908 "trsvcid": "4420" 00:20:02.908 }, 00:20:02.908 "peer_address": { 00:20:02.908 "trtype": "TCP", 00:20:02.908 "adrfam": "IPv4", 00:20:02.908 "traddr": "10.0.0.1", 00:20:02.908 "trsvcid": "33180" 00:20:02.908 }, 00:20:02.908 "auth": { 
00:20:02.908 "state": "completed", 00:20:02.908 "digest": "sha256", 00:20:02.908 "dhgroup": "ffdhe2048" 00:20:02.908 } 00:20:02.908 } 00:20:02.908 ]' 00:20:02.908 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.273 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.273 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.273 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.273 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.273 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.273 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.273 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.273 18:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:20:04.204 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.205 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.205 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.205 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.205 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.205 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.205 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.205 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.205 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.462 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.026 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.026 { 00:20:05.026 "cntlid": 17, 00:20:05.026 "qid": 0, 00:20:05.026 "state": "enabled", 00:20:05.026 "thread": "nvmf_tgt_poll_group_000", 00:20:05.026 "listen_address": { 00:20:05.026 "trtype": "TCP", 00:20:05.026 "adrfam": "IPv4", 00:20:05.026 "traddr": "10.0.0.2", 00:20:05.026 "trsvcid": "4420" 00:20:05.026 }, 00:20:05.026 "peer_address": { 00:20:05.026 "trtype": "TCP", 00:20:05.026 "adrfam": "IPv4", 00:20:05.026 "traddr": "10.0.0.1", 00:20:05.026 "trsvcid": "33212" 00:20:05.026 }, 00:20:05.026 "auth": { 00:20:05.026 "state": "completed", 00:20:05.026 "digest": "sha256", 00:20:05.026 "dhgroup": "ffdhe3072" 00:20:05.026 } 00:20:05.026 } 00:20:05.026 ]' 00:20:05.026 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.283 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.283 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.283 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.283 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.283 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.283 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.283 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.539 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:20:06.470 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.470 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.470 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.470 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.470 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.470 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.470 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.470 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.728 18:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.728 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:06.986 00:20:06.986 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.986 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.986 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.243 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.243 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.243 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.244 { 00:20:07.244 "cntlid": 19, 00:20:07.244 "qid": 0, 00:20:07.244 "state": "enabled", 00:20:07.244 "thread": "nvmf_tgt_poll_group_000", 00:20:07.244 "listen_address": { 00:20:07.244 "trtype": "TCP", 00:20:07.244 "adrfam": "IPv4", 00:20:07.244 "traddr": "10.0.0.2", 00:20:07.244 "trsvcid": "4420" 00:20:07.244 }, 00:20:07.244 "peer_address": { 00:20:07.244 "trtype": "TCP", 00:20:07.244 "adrfam": "IPv4", 00:20:07.244 "traddr": "10.0.0.1", 00:20:07.244 "trsvcid": "33242" 00:20:07.244 }, 00:20:07.244 "auth": { 00:20:07.244 "state": "completed", 00:20:07.244 "digest": "sha256", 00:20:07.244 "dhgroup": "ffdhe3072" 00:20:07.244 } 00:20:07.244 } 00:20:07.244 ]' 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.244 
18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.244 18:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.501 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:20:08.434 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.434 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.434 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.434 18:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.434 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.434 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.434 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.434 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.693 18:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.693 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.258 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.258 { 
00:20:09.258 "cntlid": 21, 00:20:09.258 "qid": 0, 00:20:09.258 "state": "enabled", 00:20:09.258 "thread": "nvmf_tgt_poll_group_000", 00:20:09.258 "listen_address": { 00:20:09.258 "trtype": "TCP", 00:20:09.258 "adrfam": "IPv4", 00:20:09.258 "traddr": "10.0.0.2", 00:20:09.258 "trsvcid": "4420" 00:20:09.258 }, 00:20:09.258 "peer_address": { 00:20:09.258 "trtype": "TCP", 00:20:09.258 "adrfam": "IPv4", 00:20:09.258 "traddr": "10.0.0.1", 00:20:09.258 "trsvcid": "33260" 00:20:09.258 }, 00:20:09.258 "auth": { 00:20:09.258 "state": "completed", 00:20:09.258 "digest": "sha256", 00:20:09.258 "dhgroup": "ffdhe3072" 00:20:09.258 } 00:20:09.258 } 00:20:09.258 ]' 00:20:09.258 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.516 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.516 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.516 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.516 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.516 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.516 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.516 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.774 18:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl:
00:20:10.709 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:10.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:10.709 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:10.709 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:10.709 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.709 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:10.709 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:10.709 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:10.709 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:10.967 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:11.532
00:20:11.532 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:11.532 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:11.532 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:11.532 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:11.532 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:11.532 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:11.532 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.532 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:11.532 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:11.532 {
00:20:11.532 "cntlid": 23,
00:20:11.532 "qid": 0,
00:20:11.532 "state": "enabled",
00:20:11.532 "thread": "nvmf_tgt_poll_group_000",
00:20:11.532 "listen_address": {
00:20:11.532 "trtype": "TCP",
00:20:11.532 "adrfam": "IPv4",
00:20:11.532 "traddr": "10.0.0.2",
00:20:11.532 "trsvcid": "4420"
00:20:11.532 },
00:20:11.532 "peer_address": {
00:20:11.532 "trtype": "TCP",
00:20:11.532 "adrfam": "IPv4",
00:20:11.532 "traddr": "10.0.0.1",
00:20:11.532 "trsvcid": "33292"
00:20:11.532 },
00:20:11.532 "auth": {
00:20:11.532 "state": "completed",
00:20:11.532 "digest": "sha256",
00:20:11.532 "dhgroup": "ffdhe3072"
00:20:11.532 }
00:20:11.532 }
00:20:11.532 ]'
00:20:11.532 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:11.790 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:11.790 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:11.790 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:11.790 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:11.790 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:11.790 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:11.790 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:12.047 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=:
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:12.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:12.981 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.238 18:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.496
00:20:13.496 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:13.496 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.496 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:13.753 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.753 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:13.753 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:13.753 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.753 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:13.753 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:13.753 {
00:20:13.753 "cntlid": 25,
00:20:13.753 "qid": 0,
00:20:13.753 "state": "enabled",
00:20:13.753 "thread": "nvmf_tgt_poll_group_000",
00:20:13.753 "listen_address": {
00:20:13.753 "trtype": "TCP",
00:20:13.753 "adrfam": "IPv4",
00:20:13.753 "traddr": "10.0.0.2",
00:20:13.753 "trsvcid": "4420"
00:20:13.753 },
00:20:13.753 "peer_address": {
00:20:13.753 "trtype": "TCP",
00:20:13.753 "adrfam": "IPv4",
00:20:13.753 "traddr": "10.0.0.1",
00:20:13.753 "trsvcid": "38686" },
00:20:13.753 "auth": {
00:20:13.753 "state": "completed",
00:20:13.753 "digest": "sha256",
00:20:13.753 "dhgroup": "ffdhe4096" }
00:20:13.754 }
00:20:13.754 ]'
00:20:13.754 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:13.754 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:13.754 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:14.011 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:14.011 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:14.011 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:14.011 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:14.011 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:14.268 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=:
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:15.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.201 18:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.766
00:20:15.766 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:15.766 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:15.766 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:16.023 {
00:20:16.023 "cntlid": 27,
00:20:16.023 "qid": 0,
00:20:16.023 "state": "enabled",
00:20:16.023 "thread": "nvmf_tgt_poll_group_000",
00:20:16.023 "listen_address": {
00:20:16.023 "trtype": "TCP",
00:20:16.023 "adrfam": "IPv4",
00:20:16.023 "traddr": "10.0.0.2",
00:20:16.023 "trsvcid": "4420"
00:20:16.023 },
00:20:16.023 "peer_address": {
00:20:16.023 "trtype": "TCP",
00:20:16.023 "adrfam": "IPv4",
00:20:16.023 "traddr": "10.0.0.1",
00:20:16.023 "trsvcid": "38708"
00:20:16.023 },
00:20:16.023 "auth": {
00:20:16.023 "state": "completed",
00:20:16.023 "digest": "sha256",
00:20:16.023 "dhgroup": "ffdhe4096"
00:20:16.023 }
00:20:16.023 }
00:20:16.023 ]'
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:16.023 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:16.024 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:16.280 18:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==:
00:20:17.212 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:17.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:17.212 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:17.213 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:17.213 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.213 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:17.213 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:17.213 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:17.213 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.471 18:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.729
00:20:17.729 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:17.729 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:17.729 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:17.986 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:17.986 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:17.986 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:17.986 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.986 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:17.987 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:17.987 {
00:20:17.987 "cntlid": 29,
00:20:17.987 "qid": 0,
00:20:17.987 "state": "enabled",
00:20:17.987 "thread": "nvmf_tgt_poll_group_000",
00:20:17.987 "listen_address": {
00:20:17.987 "trtype": "TCP",
00:20:17.987 "adrfam": "IPv4",
00:20:17.987 "traddr": "10.0.0.2",
00:20:17.987 "trsvcid": "4420"
00:20:17.987 },
00:20:17.987 "peer_address": {
00:20:17.987 "trtype": "TCP",
00:20:17.987 "adrfam": "IPv4",
00:20:17.987 "traddr": "10.0.0.1",
00:20:17.987 "trsvcid": "38730"
00:20:17.987 },
00:20:17.987 "auth": {
00:20:17.987 "state": "completed",
00:20:17.987 "digest": "sha256",
00:20:17.987 "dhgroup": "ffdhe4096"
00:20:17.987 }
00:20:17.987 }
00:20:17.987 ]'
00:20:17.987 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:17.987 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:17.987 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:18.244 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:18.244 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:18.244 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:18.244 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:18.244 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:18.503 18:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl:
00:20:19.436 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:19.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:19.436 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:19.436 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:19.436 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.436 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:19.436 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:19.436 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:19.436 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:19.694 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:19.953
00:20:19.953 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:19.953 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:19.953 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:20.211 {
00:20:20.211 "cntlid": 31,
00:20:20.211 "qid": 0,
00:20:20.211 "state": "enabled",
00:20:20.211 "thread": "nvmf_tgt_poll_group_000",
00:20:20.211 "listen_address": {
00:20:20.211 "trtype": "TCP",
00:20:20.211 "adrfam": "IPv4",
00:20:20.211 "traddr": "10.0.0.2",
00:20:20.211 "trsvcid": "4420"
00:20:20.211 },
00:20:20.211 "peer_address": {
00:20:20.211 "trtype": "TCP",
00:20:20.211 "adrfam": "IPv4",
00:20:20.211 "traddr": "10.0.0.1",
00:20:20.211 "trsvcid": "38766"
00:20:20.211 },
00:20:20.211 "auth": {
00:20:20.211 "state": "completed",
00:20:20.211 "digest": "sha256",
00:20:20.211 "dhgroup": "ffdhe4096"
00:20:20.211 }
00:20:20.211 }
00:20:20.211 ]'
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:20.211 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:20.469 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:20.469 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:20.469 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:20.726 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=:
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:21.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:21.659 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:22.225
00:20:22.225 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:22.225 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:22.225 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:22.790 {
00:20:22.790 "cntlid": 33,
00:20:22.790 "qid": 0,
00:20:22.790 "state": "enabled",
00:20:22.790 "thread": "nvmf_tgt_poll_group_000",
00:20:22.790 "listen_address": {
00:20:22.790 "trtype": "TCP",
00:20:22.790 "adrfam": "IPv4",
00:20:22.790 "traddr": "10.0.0.2",
00:20:22.790 "trsvcid": "4420"
00:20:22.790 },
00:20:22.790 "peer_address": {
00:20:22.790 "trtype": "TCP",
00:20:22.790 "adrfam": "IPv4",
00:20:22.790 "traddr": "10.0.0.1",
00:20:22.790 "trsvcid": "38804"
00:20:22.790 },
00:20:22.790 "auth": {
00:20:22.790 "state": "completed",
00:20:22.790 "digest": "sha256",
00:20:22.790 "dhgroup": "ffdhe6144"
00:20:22.790 }
00:20:22.790 }
00:20:22.790 ]'
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:22.790 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:23.048 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=:
00:20:23.980 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:23.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:23.980 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:23.980 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:23.980 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:23.980 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:23.980 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:23.980 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests
sha256 --dhchap-dhgroups ffdhe6144 00:20:23.980 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.237 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.802 00:20:24.802 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.802 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.802 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.060 { 00:20:25.060 "cntlid": 35, 00:20:25.060 "qid": 0, 00:20:25.060 "state": "enabled", 00:20:25.060 "thread": "nvmf_tgt_poll_group_000", 00:20:25.060 "listen_address": { 00:20:25.060 "trtype": "TCP", 00:20:25.060 "adrfam": "IPv4", 00:20:25.060 "traddr": "10.0.0.2", 00:20:25.060 "trsvcid": "4420" 00:20:25.060 }, 00:20:25.060 "peer_address": { 00:20:25.060 "trtype": "TCP", 00:20:25.060 "adrfam": "IPv4", 00:20:25.060 "traddr": "10.0.0.1", 00:20:25.060 "trsvcid": "50596" 00:20:25.060 
}, 00:20:25.060 "auth": { 00:20:25.060 "state": "completed", 00:20:25.060 "digest": "sha256", 00:20:25.060 "dhgroup": "ffdhe6144" 00:20:25.060 } 00:20:25.060 } 00:20:25.060 ]' 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.060 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.317 18:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:20:26.248 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.248 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.248 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.248 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.248 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.248 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.248 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.248 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.536 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:26.536 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.536 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.536 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:26.536 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:26.536 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.536 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:20:26.536 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.537 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.537 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.537 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.537 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.101 00:20:27.101 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.101 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.101 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.358 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.358 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.358 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.358 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:27.358 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.358 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.358 { 00:20:27.358 "cntlid": 37, 00:20:27.358 "qid": 0, 00:20:27.358 "state": "enabled", 00:20:27.358 "thread": "nvmf_tgt_poll_group_000", 00:20:27.358 "listen_address": { 00:20:27.358 "trtype": "TCP", 00:20:27.358 "adrfam": "IPv4", 00:20:27.358 "traddr": "10.0.0.2", 00:20:27.358 "trsvcid": "4420" 00:20:27.358 }, 00:20:27.358 "peer_address": { 00:20:27.358 "trtype": "TCP", 00:20:27.359 "adrfam": "IPv4", 00:20:27.359 "traddr": "10.0.0.1", 00:20:27.359 "trsvcid": "50610" 00:20:27.359 }, 00:20:27.359 "auth": { 00:20:27.359 "state": "completed", 00:20:27.359 "digest": "sha256", 00:20:27.359 "dhgroup": "ffdhe6144" 00:20:27.359 } 00:20:27.359 } 00:20:27.359 ]' 00:20:27.359 18:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.359 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.359 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.616 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.616 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.616 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.616 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.616 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:27.874 18:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:20:28.806 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.806 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.806 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.806 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.806 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.806 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.806 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.806 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:29.064 18:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.064 18:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.628 00:20:29.628 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.628 18:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.628 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.886 { 00:20:29.886 "cntlid": 39, 00:20:29.886 "qid": 0, 00:20:29.886 "state": "enabled", 00:20:29.886 "thread": "nvmf_tgt_poll_group_000", 00:20:29.886 "listen_address": { 00:20:29.886 "trtype": "TCP", 00:20:29.886 "adrfam": "IPv4", 00:20:29.886 "traddr": "10.0.0.2", 00:20:29.886 "trsvcid": "4420" 00:20:29.886 }, 00:20:29.886 "peer_address": { 00:20:29.886 "trtype": "TCP", 00:20:29.886 "adrfam": "IPv4", 00:20:29.886 "traddr": "10.0.0.1", 00:20:29.886 "trsvcid": "50638" 00:20:29.886 }, 00:20:29.886 "auth": { 00:20:29.886 "state": "completed", 00:20:29.886 "digest": "sha256", 00:20:29.886 "dhgroup": "ffdhe6144" 00:20:29.886 } 00:20:29.886 } 00:20:29.886 ]' 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.886 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.144 18:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:20:31.077 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.077 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.077 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.077 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.077 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.077 18:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.077 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.077 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.077 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.335 18:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.335 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.267 00:20:32.267 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.267 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.267 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.267 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.525 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.525 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.525 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.525 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.525 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.525 { 00:20:32.525 "cntlid": 41, 00:20:32.525 "qid": 0, 00:20:32.525 "state": "enabled", 00:20:32.525 "thread": 
"nvmf_tgt_poll_group_000", 00:20:32.525 "listen_address": { 00:20:32.525 "trtype": "TCP", 00:20:32.525 "adrfam": "IPv4", 00:20:32.525 "traddr": "10.0.0.2", 00:20:32.525 "trsvcid": "4420" 00:20:32.525 }, 00:20:32.525 "peer_address": { 00:20:32.525 "trtype": "TCP", 00:20:32.525 "adrfam": "IPv4", 00:20:32.525 "traddr": "10.0.0.1", 00:20:32.525 "trsvcid": "50664" 00:20:32.525 }, 00:20:32.525 "auth": { 00:20:32.525 "state": "completed", 00:20:32.525 "digest": "sha256", 00:20:32.525 "dhgroup": "ffdhe8192" 00:20:32.525 } 00:20:32.525 } 00:20:32.525 ]' 00:20:32.525 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.525 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.525 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.525 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.525 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.525 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.525 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.525 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.783 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:20:33.716 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.716 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.716 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.716 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.716 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.716 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.716 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.716 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.974 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.907 00:20:34.907 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.907 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.907 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.165 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.166 { 00:20:35.166 "cntlid": 43, 00:20:35.166 "qid": 0, 00:20:35.166 "state": "enabled", 00:20:35.166 "thread": "nvmf_tgt_poll_group_000", 00:20:35.166 "listen_address": { 00:20:35.166 "trtype": "TCP", 00:20:35.166 "adrfam": "IPv4", 00:20:35.166 "traddr": "10.0.0.2", 00:20:35.166 "trsvcid": "4420" 00:20:35.166 }, 00:20:35.166 "peer_address": { 00:20:35.166 "trtype": "TCP", 00:20:35.166 "adrfam": "IPv4", 00:20:35.166 "traddr": "10.0.0.1", 00:20:35.166 "trsvcid": "33286" 00:20:35.166 }, 00:20:35.166 "auth": { 00:20:35.166 "state": "completed", 00:20:35.166 "digest": "sha256", 00:20:35.166 "dhgroup": "ffdhe8192" 00:20:35.166 } 00:20:35.166 } 00:20:35.166 ]' 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.166 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.423 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:20:36.355 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.355 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.355 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.355 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.355 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.355 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.355 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.355 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.613 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.613 18:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.545 00:20:37.545 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.545 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.545 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.803 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.804 { 00:20:37.804 "cntlid": 45, 00:20:37.804 "qid": 0, 00:20:37.804 "state": "enabled", 00:20:37.804 "thread": "nvmf_tgt_poll_group_000", 00:20:37.804 "listen_address": { 00:20:37.804 "trtype": "TCP", 00:20:37.804 "adrfam": "IPv4", 00:20:37.804 "traddr": "10.0.0.2", 00:20:37.804 "trsvcid": "4420" 00:20:37.804 }, 00:20:37.804 "peer_address": { 00:20:37.804 "trtype": "TCP", 00:20:37.804 "adrfam": "IPv4", 00:20:37.804 "traddr": "10.0.0.1", 
00:20:37.804 "trsvcid": "33302" 00:20:37.804 }, 00:20:37.804 "auth": { 00:20:37.804 "state": "completed", 00:20:37.804 "digest": "sha256", 00:20:37.804 "dhgroup": "ffdhe8192" 00:20:37.804 } 00:20:37.804 } 00:20:37.804 ]' 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.804 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.062 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:20:38.995 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.995 18:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.995 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.995 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.995 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.995 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.995 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:38.995 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.253 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.185 00:20:40.185 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.185 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.185 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.443 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.443 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.443 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.443 18:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.443 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.443 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.443 { 00:20:40.443 "cntlid": 47, 00:20:40.443 "qid": 0, 00:20:40.443 "state": "enabled", 00:20:40.443 "thread": "nvmf_tgt_poll_group_000", 00:20:40.443 "listen_address": { 00:20:40.443 "trtype": "TCP", 00:20:40.443 "adrfam": "IPv4", 00:20:40.443 "traddr": "10.0.0.2", 00:20:40.443 "trsvcid": "4420" 00:20:40.443 }, 00:20:40.443 "peer_address": { 00:20:40.443 "trtype": "TCP", 00:20:40.443 "adrfam": "IPv4", 00:20:40.443 "traddr": "10.0.0.1", 00:20:40.443 "trsvcid": "33330" 00:20:40.443 }, 00:20:40.443 "auth": { 00:20:40.443 "state": "completed", 00:20:40.443 "digest": "sha256", 00:20:40.443 "dhgroup": "ffdhe8192" 00:20:40.443 } 00:20:40.443 } 00:20:40.443 ]' 00:20:40.443 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.443 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.443 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.443 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.443 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.701 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.701 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.701 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.958 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:41.892 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.150 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.408 00:20:42.408 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.408 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.408 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.666 { 00:20:42.666 "cntlid": 49, 00:20:42.666 "qid": 0, 00:20:42.666 "state": "enabled", 00:20:42.666 "thread": "nvmf_tgt_poll_group_000", 00:20:42.666 "listen_address": { 00:20:42.666 "trtype": "TCP", 00:20:42.666 "adrfam": "IPv4", 00:20:42.666 "traddr": "10.0.0.2", 00:20:42.666 "trsvcid": "4420" 00:20:42.666 }, 00:20:42.666 "peer_address": { 00:20:42.666 "trtype": "TCP", 00:20:42.666 "adrfam": "IPv4", 00:20:42.666 "traddr": "10.0.0.1", 00:20:42.666 "trsvcid": "33342" 00:20:42.666 }, 00:20:42.666 "auth": { 00:20:42.666 "state": "completed", 00:20:42.666 "digest": "sha384", 00:20:42.666 "dhgroup": "null" 00:20:42.666 } 00:20:42.666 } 00:20:42.666 ]' 00:20:42.666 
18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.666 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.924 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:20:43.895 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.895 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.895 
18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.895 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.895 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.895 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.895 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.895 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.153 18:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.153 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.411 00:20:44.411 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.411 18:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.411 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.670 { 00:20:44.670 "cntlid": 51, 00:20:44.670 "qid": 0, 00:20:44.670 "state": "enabled", 00:20:44.670 "thread": "nvmf_tgt_poll_group_000", 00:20:44.670 "listen_address": { 00:20:44.670 "trtype": "TCP", 00:20:44.670 "adrfam": "IPv4", 00:20:44.670 "traddr": "10.0.0.2", 00:20:44.670 "trsvcid": "4420" 00:20:44.670 }, 00:20:44.670 "peer_address": { 00:20:44.670 "trtype": "TCP", 00:20:44.670 "adrfam": "IPv4", 00:20:44.670 "traddr": "10.0.0.1", 00:20:44.670 "trsvcid": "39828" 00:20:44.670 }, 00:20:44.670 "auth": { 00:20:44.670 "state": "completed", 00:20:44.670 "digest": "sha384", 00:20:44.670 "dhgroup": "null" 00:20:44.670 } 00:20:44.670 } 00:20:44.670 ]' 00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.670 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.927 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:44.927 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.927 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.927 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.927 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.184 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.116 18:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.116 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.373 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.374 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.374 18:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.630 00:20:46.630 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.630 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.630 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.888 { 00:20:46.888 "cntlid": 53, 00:20:46.888 "qid": 0, 00:20:46.888 "state": "enabled", 00:20:46.888 "thread": "nvmf_tgt_poll_group_000", 00:20:46.888 "listen_address": { 00:20:46.888 "trtype": "TCP", 00:20:46.888 "adrfam": "IPv4", 00:20:46.888 "traddr": "10.0.0.2", 00:20:46.888 "trsvcid": "4420" 00:20:46.888 }, 00:20:46.888 "peer_address": { 00:20:46.888 "trtype": "TCP", 00:20:46.888 "adrfam": "IPv4", 00:20:46.888 "traddr": "10.0.0.1", 00:20:46.888 "trsvcid": "39862" 00:20:46.888 }, 00:20:46.888 "auth": { 00:20:46.888 "state": "completed", 00:20:46.888 "digest": "sha384", 00:20:46.888 "dhgroup": "null" 00:20:46.888 } 00:20:46.888 } 00:20:46.888 ]' 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:46.888 18:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.888 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.146 18:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:20:48.077 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.077 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.077 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.077 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.078 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.078 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.078 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:48.078 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.336 18:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.902 00:20:48.902 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.902 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.902 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.159 { 00:20:49.159 "cntlid": 55, 00:20:49.159 "qid": 0, 00:20:49.159 "state": "enabled", 00:20:49.159 "thread": "nvmf_tgt_poll_group_000", 00:20:49.159 "listen_address": { 00:20:49.159 "trtype": "TCP", 00:20:49.159 "adrfam": "IPv4", 00:20:49.159 "traddr": "10.0.0.2", 00:20:49.159 "trsvcid": "4420" 00:20:49.159 }, 00:20:49.159 "peer_address": { 00:20:49.159 "trtype": "TCP", 00:20:49.159 "adrfam": "IPv4", 00:20:49.159 "traddr": "10.0.0.1", 00:20:49.159 "trsvcid": "39902" 00:20:49.159 }, 00:20:49.159 "auth": { 
00:20:49.159 "state": "completed", 00:20:49.159 "digest": "sha384", 00:20:49.159 "dhgroup": "null" 00:20:49.159 } 00:20:49.159 } 00:20:49.159 ]' 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.159 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.160 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:49.160 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.160 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.160 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.160 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.417 18:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.350 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.608 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.867 00:20:50.867 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.867 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.867 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.126 { 00:20:51.126 "cntlid": 57, 00:20:51.126 "qid": 0, 00:20:51.126 "state": "enabled", 00:20:51.126 "thread": "nvmf_tgt_poll_group_000", 00:20:51.126 "listen_address": { 00:20:51.126 "trtype": "TCP", 00:20:51.126 "adrfam": "IPv4", 00:20:51.126 "traddr": "10.0.0.2", 00:20:51.126 "trsvcid": "4420" 00:20:51.126 }, 00:20:51.126 "peer_address": { 00:20:51.126 "trtype": "TCP", 00:20:51.126 "adrfam": "IPv4", 00:20:51.126 "traddr": "10.0.0.1", 00:20:51.126 "trsvcid": "39932" 00:20:51.126 }, 00:20:51.126 "auth": { 00:20:51.126 "state": "completed", 00:20:51.126 "digest": "sha384", 00:20:51.126 "dhgroup": "ffdhe2048" 00:20:51.126 } 00:20:51.126 } 00:20:51.126 ]' 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.126 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.384 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:20:52.316 18:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.316 18:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.316 18:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.316 18:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.316 18:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.316 18:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.316 18:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.316 18:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.575 18:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.575 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:52.832 00:20:52.833 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.833 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.833 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.091 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.091 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.091 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.091 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.091 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.091 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.091 { 00:20:53.091 "cntlid": 59, 00:20:53.091 "qid": 0, 00:20:53.091 "state": "enabled", 00:20:53.091 "thread": "nvmf_tgt_poll_group_000", 00:20:53.091 "listen_address": { 00:20:53.091 "trtype": "TCP", 00:20:53.091 "adrfam": "IPv4", 00:20:53.091 "traddr": "10.0.0.2", 00:20:53.091 "trsvcid": "4420" 00:20:53.091 }, 00:20:53.091 "peer_address": { 00:20:53.091 "trtype": "TCP", 00:20:53.091 "adrfam": "IPv4", 00:20:53.091 "traddr": "10.0.0.1", 00:20:53.091 "trsvcid": "55834" 00:20:53.091 }, 00:20:53.091 "auth": { 00:20:53.091 "state": "completed", 00:20:53.091 "digest": "sha384", 00:20:53.091 "dhgroup": "ffdhe2048" 00:20:53.091 } 00:20:53.091 } 00:20:53.091 ]' 00:20:53.091 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.091 
18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.091 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.349 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:53.349 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.349 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.349 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.349 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.606 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:20:54.540 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.540 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.540 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.540 18:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.540 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.540 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.540 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.540 18:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.798 18:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.798 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.056 00:20:55.056 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.056 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.056 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.314 { 
00:20:55.314 "cntlid": 61, 00:20:55.314 "qid": 0, 00:20:55.314 "state": "enabled", 00:20:55.314 "thread": "nvmf_tgt_poll_group_000", 00:20:55.314 "listen_address": { 00:20:55.314 "trtype": "TCP", 00:20:55.314 "adrfam": "IPv4", 00:20:55.314 "traddr": "10.0.0.2", 00:20:55.314 "trsvcid": "4420" 00:20:55.314 }, 00:20:55.314 "peer_address": { 00:20:55.314 "trtype": "TCP", 00:20:55.314 "adrfam": "IPv4", 00:20:55.314 "traddr": "10.0.0.1", 00:20:55.314 "trsvcid": "55860" 00:20:55.314 }, 00:20:55.314 "auth": { 00:20:55.314 "state": "completed", 00:20:55.314 "digest": "sha384", 00:20:55.314 "dhgroup": "ffdhe2048" 00:20:55.314 } 00:20:55.314 } 00:20:55.314 ]' 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.314 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.572 18:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:20:56.505 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.505 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.505 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.505 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.505 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.505 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.505 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.505 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.763 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.020 00:20:57.020 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.020 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.021 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.279 18:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.279 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.279 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.279 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.279 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.279 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.279 { 00:20:57.279 "cntlid": 63, 00:20:57.279 "qid": 0, 00:20:57.279 "state": "enabled", 00:20:57.279 "thread": "nvmf_tgt_poll_group_000", 00:20:57.279 "listen_address": { 00:20:57.279 "trtype": "TCP", 00:20:57.279 "adrfam": "IPv4", 00:20:57.279 "traddr": "10.0.0.2", 00:20:57.279 "trsvcid": "4420" 00:20:57.279 }, 00:20:57.279 "peer_address": { 00:20:57.279 "trtype": "TCP", 00:20:57.279 "adrfam": "IPv4", 00:20:57.279 "traddr": "10.0.0.1", 00:20:57.279 "trsvcid": "55896" 00:20:57.279 }, 00:20:57.279 "auth": { 00:20:57.279 "state": "completed", 00:20:57.279 "digest": "sha384", 00:20:57.279 "dhgroup": "ffdhe2048" 00:20:57.279 } 00:20:57.279 } 00:20:57.279 ]' 00:20:57.279 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.537 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.537 18:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.537 18:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.537 18:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.537 18:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.537 18:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.537 18:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.795 18:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.729 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.987 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.987 18:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.246 00:20:59.246 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.246 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.246 18:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.539 { 00:20:59.539 "cntlid": 65, 00:20:59.539 "qid": 0, 00:20:59.539 "state": "enabled", 00:20:59.539 "thread": "nvmf_tgt_poll_group_000", 00:20:59.539 "listen_address": { 00:20:59.539 "trtype": "TCP", 00:20:59.539 "adrfam": "IPv4", 00:20:59.539 "traddr": "10.0.0.2", 00:20:59.539 "trsvcid": "4420" 00:20:59.539 }, 00:20:59.539 "peer_address": { 00:20:59.539 "trtype": "TCP", 00:20:59.539 "adrfam": "IPv4", 00:20:59.539 "traddr": "10.0.0.1", 
00:20:59.539 "trsvcid": "55926" 00:20:59.539 }, 00:20:59.539 "auth": { 00:20:59.539 "state": "completed", 00:20:59.539 "digest": "sha384", 00:20:59.539 "dhgroup": "ffdhe3072" 00:20:59.539 } 00:20:59.539 } 00:20:59.539 ]' 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.539 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.797 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.797 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.797 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.797 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.797 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.054 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.987 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.553 00:21:01.553 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.553 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.553 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.811 { 00:21:01.811 "cntlid": 67, 00:21:01.811 "qid": 0, 00:21:01.811 "state": "enabled", 00:21:01.811 "thread": "nvmf_tgt_poll_group_000", 00:21:01.811 "listen_address": { 00:21:01.811 "trtype": "TCP", 00:21:01.811 "adrfam": "IPv4", 00:21:01.811 "traddr": "10.0.0.2", 00:21:01.811 "trsvcid": "4420" 00:21:01.811 }, 00:21:01.811 "peer_address": { 00:21:01.811 "trtype": "TCP", 00:21:01.811 "adrfam": "IPv4", 00:21:01.811 "traddr": "10.0.0.1", 00:21:01.811 "trsvcid": "55964" 00:21:01.811 }, 00:21:01.811 "auth": { 00:21:01.811 "state": "completed", 00:21:01.811 "digest": "sha384", 00:21:01.811 "dhgroup": "ffdhe3072" 00:21:01.811 } 00:21:01.811 } 00:21:01.811 ]' 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.811 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.068 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:21:03.002 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.002 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.002 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.002 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.002 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.002 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.002 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.002 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.260 18:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.518 00:21:03.518 18:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.518 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.518 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.776 { 00:21:03.776 "cntlid": 69, 00:21:03.776 "qid": 0, 00:21:03.776 "state": "enabled", 00:21:03.776 "thread": "nvmf_tgt_poll_group_000", 00:21:03.776 "listen_address": { 00:21:03.776 "trtype": "TCP", 00:21:03.776 "adrfam": "IPv4", 00:21:03.776 "traddr": "10.0.0.2", 00:21:03.776 "trsvcid": "4420" 00:21:03.776 }, 00:21:03.776 "peer_address": { 00:21:03.776 "trtype": "TCP", 00:21:03.776 "adrfam": "IPv4", 00:21:03.776 "traddr": "10.0.0.1", 00:21:03.776 "trsvcid": "59890" 00:21:03.776 }, 00:21:03.776 "auth": { 00:21:03.776 "state": "completed", 00:21:03.776 "digest": "sha384", 00:21:03.776 "dhgroup": "ffdhe3072" 00:21:03.776 } 00:21:03.776 } 00:21:03.776 ]' 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.776 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.033 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.033 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.033 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.291 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:21:05.223 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.223 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.223 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.223 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.223 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.224 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.789 00:21:05.789 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.789 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.789 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.046 { 00:21:06.046 "cntlid": 71, 00:21:06.046 "qid": 0, 00:21:06.046 "state": "enabled", 00:21:06.046 "thread": "nvmf_tgt_poll_group_000", 
00:21:06.046 "listen_address": { 00:21:06.046 "trtype": "TCP", 00:21:06.046 "adrfam": "IPv4", 00:21:06.046 "traddr": "10.0.0.2", 00:21:06.046 "trsvcid": "4420" 00:21:06.046 }, 00:21:06.046 "peer_address": { 00:21:06.046 "trtype": "TCP", 00:21:06.046 "adrfam": "IPv4", 00:21:06.046 "traddr": "10.0.0.1", 00:21:06.046 "trsvcid": "59902" 00:21:06.046 }, 00:21:06.046 "auth": { 00:21:06.046 "state": "completed", 00:21:06.046 "digest": "sha384", 00:21:06.046 "dhgroup": "ffdhe3072" 00:21:06.046 } 00:21:06.046 } 00:21:06.046 ]' 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.046 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.303 18:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 
00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.235 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.492 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.750 00:21:07.750 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.750 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.750 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.008 18:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.008 { 00:21:08.008 "cntlid": 73, 00:21:08.008 "qid": 0, 00:21:08.008 "state": "enabled", 00:21:08.008 "thread": "nvmf_tgt_poll_group_000", 00:21:08.008 "listen_address": { 00:21:08.008 "trtype": "TCP", 00:21:08.008 "adrfam": "IPv4", 00:21:08.008 "traddr": "10.0.0.2", 00:21:08.008 "trsvcid": "4420" 00:21:08.008 }, 00:21:08.008 "peer_address": { 00:21:08.008 "trtype": "TCP", 00:21:08.008 "adrfam": "IPv4", 00:21:08.008 "traddr": "10.0.0.1", 00:21:08.008 "trsvcid": "59936" 00:21:08.008 }, 00:21:08.008 "auth": { 00:21:08.008 "state": "completed", 00:21:08.008 "digest": "sha384", 00:21:08.008 "dhgroup": "ffdhe4096" 00:21:08.008 } 00:21:08.008 } 00:21:08.008 ]' 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:08.008 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.266 18:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.266 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.266 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.523 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:21:09.453 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.453 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.453 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.453 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.453 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.453 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.453 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:21:09.453 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.709 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.966 00:21:09.966 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.966 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.966 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.222 { 00:21:10.222 "cntlid": 75, 00:21:10.222 "qid": 0, 00:21:10.222 "state": "enabled", 00:21:10.222 "thread": "nvmf_tgt_poll_group_000", 00:21:10.222 "listen_address": { 00:21:10.222 "trtype": "TCP", 00:21:10.222 "adrfam": "IPv4", 00:21:10.222 "traddr": "10.0.0.2", 00:21:10.222 "trsvcid": "4420" 00:21:10.222 }, 00:21:10.222 "peer_address": { 00:21:10.222 "trtype": "TCP", 00:21:10.222 "adrfam": "IPv4", 00:21:10.222 "traddr": "10.0.0.1", 00:21:10.222 "trsvcid": "59970" 00:21:10.222 
}, 00:21:10.222 "auth": { 00:21:10.222 "state": "completed", 00:21:10.222 "digest": "sha384", 00:21:10.222 "dhgroup": "ffdhe4096" 00:21:10.222 } 00:21:10.222 } 00:21:10.222 ]' 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.222 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.223 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.479 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.479 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.479 18:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.736 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:21:11.668 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.668 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.668 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.668 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.668 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.668 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.668 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.668 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.925 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.182 00:21:12.182 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.182 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.182 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.440 { 00:21:12.440 "cntlid": 77, 00:21:12.440 "qid": 0, 00:21:12.440 "state": "enabled", 00:21:12.440 "thread": "nvmf_tgt_poll_group_000", 00:21:12.440 "listen_address": { 00:21:12.440 "trtype": "TCP", 00:21:12.440 "adrfam": "IPv4", 00:21:12.440 "traddr": "10.0.0.2", 00:21:12.440 "trsvcid": "4420" 00:21:12.440 }, 00:21:12.440 "peer_address": { 00:21:12.440 "trtype": "TCP", 00:21:12.440 "adrfam": "IPv4", 00:21:12.440 "traddr": "10.0.0.1", 00:21:12.440 "trsvcid": "60008" 00:21:12.440 }, 00:21:12.440 "auth": { 00:21:12.440 "state": "completed", 00:21:12.440 "digest": "sha384", 00:21:12.440 "dhgroup": "ffdhe4096" 00:21:12.440 } 00:21:12.440 } 00:21:12.440 ]' 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.440 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.697 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.697 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.697 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.697 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.697 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:12.954 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:13.886 18:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.886 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.452 00:21:14.452 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.452 18:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.452 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.709 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.709 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.709 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.709 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.709 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.709 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.709 { 00:21:14.710 "cntlid": 79, 00:21:14.710 "qid": 0, 00:21:14.710 "state": "enabled", 00:21:14.710 "thread": "nvmf_tgt_poll_group_000", 00:21:14.710 "listen_address": { 00:21:14.710 "trtype": "TCP", 00:21:14.710 "adrfam": "IPv4", 00:21:14.710 "traddr": "10.0.0.2", 00:21:14.710 "trsvcid": "4420" 00:21:14.710 }, 00:21:14.710 "peer_address": { 00:21:14.710 "trtype": "TCP", 00:21:14.710 "adrfam": "IPv4", 00:21:14.710 "traddr": "10.0.0.1", 00:21:14.710 "trsvcid": "45372" 00:21:14.710 }, 00:21:14.710 "auth": { 00:21:14.710 "state": "completed", 00:21:14.710 "digest": "sha384", 00:21:14.710 "dhgroup": "ffdhe4096" 00:21:14.710 } 00:21:14.710 } 00:21:14.710 ]' 00:21:14.710 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.710 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.710 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.710 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.710 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.710 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.710 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.710 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.967 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:21:15.920 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.920 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.920 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.920 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.920 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.920 18:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.920 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.920 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.920 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.180 18:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.180 18:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.744 00:21:16.744 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.744 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.744 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.744 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.001 { 00:21:17.001 "cntlid": 81, 00:21:17.001 "qid": 0, 00:21:17.001 "state": "enabled", 00:21:17.001 "thread": 
"nvmf_tgt_poll_group_000", 00:21:17.001 "listen_address": { 00:21:17.001 "trtype": "TCP", 00:21:17.001 "adrfam": "IPv4", 00:21:17.001 "traddr": "10.0.0.2", 00:21:17.001 "trsvcid": "4420" 00:21:17.001 }, 00:21:17.001 "peer_address": { 00:21:17.001 "trtype": "TCP", 00:21:17.001 "adrfam": "IPv4", 00:21:17.001 "traddr": "10.0.0.1", 00:21:17.001 "trsvcid": "45390" 00:21:17.001 }, 00:21:17.001 "auth": { 00:21:17.001 "state": "completed", 00:21:17.001 "digest": "sha384", 00:21:17.001 "dhgroup": "ffdhe6144" 00:21:17.001 } 00:21:17.001 } 00:21:17.001 ]' 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.001 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.258 18:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:21:18.189 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.189 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.189 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.189 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.189 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.189 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.189 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.189 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.446 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.447 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.447 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.447 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.011 00:21:19.011 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.011 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.011 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.268 { 00:21:19.268 "cntlid": 83, 00:21:19.268 "qid": 0, 00:21:19.268 "state": "enabled", 00:21:19.268 "thread": "nvmf_tgt_poll_group_000", 00:21:19.268 "listen_address": { 00:21:19.268 "trtype": "TCP", 00:21:19.268 "adrfam": "IPv4", 00:21:19.268 "traddr": "10.0.0.2", 00:21:19.268 "trsvcid": "4420" 00:21:19.268 }, 00:21:19.268 "peer_address": { 00:21:19.268 "trtype": "TCP", 00:21:19.268 "adrfam": "IPv4", 00:21:19.268 "traddr": "10.0.0.1", 00:21:19.268 "trsvcid": "45412" 00:21:19.268 }, 00:21:19.268 "auth": { 00:21:19.268 "state": "completed", 00:21:19.268 "digest": "sha384", 00:21:19.268 "dhgroup": "ffdhe6144" 00:21:19.268 } 00:21:19.268 } 00:21:19.268 ]' 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.268 18:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.526 18:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:21:20.458 18:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.458 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.458 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.459 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.459 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.459 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.459 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.459 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.715 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:20.715 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.716 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.716 18:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.280 00:21:21.280 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.280 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.280 18:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.539 { 00:21:21.539 "cntlid": 85, 00:21:21.539 "qid": 0, 00:21:21.539 "state": "enabled", 00:21:21.539 "thread": "nvmf_tgt_poll_group_000", 00:21:21.539 "listen_address": { 00:21:21.539 "trtype": "TCP", 00:21:21.539 "adrfam": "IPv4", 00:21:21.539 "traddr": "10.0.0.2", 00:21:21.539 "trsvcid": "4420" 00:21:21.539 }, 00:21:21.539 "peer_address": { 00:21:21.539 "trtype": "TCP", 00:21:21.539 "adrfam": "IPv4", 00:21:21.539 "traddr": "10.0.0.1", 
00:21:21.539 "trsvcid": "45434" 00:21:21.539 }, 00:21:21.539 "auth": { 00:21:21.539 "state": "completed", 00:21:21.539 "digest": "sha384", 00:21:21.539 "dhgroup": "ffdhe6144" 00:21:21.539 } 00:21:21.539 } 00:21:21.539 ]' 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.539 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.797 18:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:21:22.733 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.733 18:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.733 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.733 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.733 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.733 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.733 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.733 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.990 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:22.990 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.990 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.990 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:22.990 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:22.990 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.990 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:22.991 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.991 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.991 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.991 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.991 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.554 00:21:23.554 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.554 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.554 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.812 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.812 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.812 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.812 18:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.812 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.812 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.812 { 00:21:23.812 "cntlid": 87, 00:21:23.812 "qid": 0, 00:21:23.812 "state": "enabled", 00:21:23.812 "thread": "nvmf_tgt_poll_group_000", 00:21:23.812 "listen_address": { 00:21:23.812 "trtype": "TCP", 00:21:23.812 "adrfam": "IPv4", 00:21:23.812 "traddr": "10.0.0.2", 00:21:23.812 "trsvcid": "4420" 00:21:23.812 }, 00:21:23.812 "peer_address": { 00:21:23.812 "trtype": "TCP", 00:21:23.812 "adrfam": "IPv4", 00:21:23.812 "traddr": "10.0.0.1", 00:21:23.812 "trsvcid": "38444" 00:21:23.812 }, 00:21:23.812 "auth": { 00:21:23.812 "state": "completed", 00:21:23.812 "digest": "sha384", 00:21:23.812 "dhgroup": "ffdhe6144" 00:21:23.812 } 00:21:23.812 } 00:21:23.812 ]' 00:21:23.812 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.812 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.812 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.069 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.069 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.069 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.069 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.069 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.326 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.259 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.517 18:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.517 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:26.448 00:21:26.448 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.448 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.448 18:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.448 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.705 { 00:21:26.705 "cntlid": 89, 00:21:26.705 "qid": 0, 00:21:26.705 "state": "enabled", 00:21:26.705 "thread": "nvmf_tgt_poll_group_000", 00:21:26.705 "listen_address": { 00:21:26.705 "trtype": "TCP", 00:21:26.705 "adrfam": "IPv4", 00:21:26.705 "traddr": "10.0.0.2", 00:21:26.705 "trsvcid": "4420" 00:21:26.705 }, 00:21:26.705 "peer_address": { 00:21:26.705 "trtype": "TCP", 00:21:26.705 "adrfam": "IPv4", 00:21:26.705 "traddr": "10.0.0.1", 00:21:26.705 "trsvcid": "38466" 00:21:26.705 }, 00:21:26.705 "auth": { 00:21:26.705 "state": "completed", 00:21:26.705 "digest": "sha384", 00:21:26.705 "dhgroup": "ffdhe8192" 00:21:26.705 } 00:21:26.705 } 00:21:26.705 ]' 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.705 
18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.705 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.962 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:21:27.893 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.893 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.893 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:27.893 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.894 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.894 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.894 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.894 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.151 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.083 00:21:29.083 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.083 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.083 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.083 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.083 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.083 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.083 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:21:29.341 { 00:21:29.341 "cntlid": 91, 00:21:29.341 "qid": 0, 00:21:29.341 "state": "enabled", 00:21:29.341 "thread": "nvmf_tgt_poll_group_000", 00:21:29.341 "listen_address": { 00:21:29.341 "trtype": "TCP", 00:21:29.341 "adrfam": "IPv4", 00:21:29.341 "traddr": "10.0.0.2", 00:21:29.341 "trsvcid": "4420" 00:21:29.341 }, 00:21:29.341 "peer_address": { 00:21:29.341 "trtype": "TCP", 00:21:29.341 "adrfam": "IPv4", 00:21:29.341 "traddr": "10.0.0.1", 00:21:29.341 "trsvcid": "38488" 00:21:29.341 }, 00:21:29.341 "auth": { 00:21:29.341 "state": "completed", 00:21:29.341 "digest": "sha384", 00:21:29.341 "dhgroup": "ffdhe8192" 00:21:29.341 } 00:21:29.341 } 00:21:29.341 ]' 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.341 18:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.598 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:21:30.531 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.531 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.531 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.531 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.531 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.531 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.531 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.531 18:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.788 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.720 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.720 18:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.720 { 00:21:31.720 "cntlid": 93, 00:21:31.720 "qid": 0, 00:21:31.720 "state": "enabled", 00:21:31.720 "thread": "nvmf_tgt_poll_group_000", 00:21:31.720 "listen_address": { 00:21:31.720 "trtype": "TCP", 00:21:31.720 "adrfam": "IPv4", 00:21:31.720 "traddr": "10.0.0.2", 00:21:31.720 "trsvcid": "4420" 00:21:31.720 }, 00:21:31.720 "peer_address": { 00:21:31.720 "trtype": "TCP", 00:21:31.720 "adrfam": "IPv4", 00:21:31.720 "traddr": "10.0.0.1", 00:21:31.720 "trsvcid": "38516" 00:21:31.720 }, 00:21:31.720 "auth": { 00:21:31.720 "state": "completed", 00:21:31.720 "digest": "sha384", 00:21:31.720 "dhgroup": "ffdhe8192" 00:21:31.720 } 00:21:31.720 } 00:21:31.720 ]' 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.720 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.977 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.977 18:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.978 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.978 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.978 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.239 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:21:33.218 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.218 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.218 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.218 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.218 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.218 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.218 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.219 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.219 18:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.151 00:21:34.151 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.151 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.151 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.408 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.408 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.408 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.408 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.408 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.408 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.408 { 00:21:34.408 "cntlid": 95, 00:21:34.408 "qid": 0, 00:21:34.408 "state": "enabled", 00:21:34.408 "thread": "nvmf_tgt_poll_group_000", 00:21:34.408 "listen_address": { 00:21:34.408 "trtype": "TCP", 00:21:34.408 "adrfam": "IPv4", 00:21:34.409 "traddr": "10.0.0.2", 00:21:34.409 "trsvcid": "4420" 00:21:34.409 }, 00:21:34.409 "peer_address": { 00:21:34.409 "trtype": "TCP", 00:21:34.409 "adrfam": "IPv4", 00:21:34.409 "traddr": "10.0.0.1", 00:21:34.409 "trsvcid": 
"38534" 00:21:34.409 }, 00:21:34.409 "auth": { 00:21:34.409 "state": "completed", 00:21:34.409 "digest": "sha384", 00:21:34.409 "dhgroup": "ffdhe8192" 00:21:34.409 } 00:21:34.409 } 00:21:34.409 ]' 00:21:34.409 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.409 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.409 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.409 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.409 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.409 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.409 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.409 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.666 18:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.597 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.858 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:35.858 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.859 18:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.859 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.116 00:21:36.116 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.116 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.116 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.374 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.374 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:36.374 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.374 18:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.374 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.374 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.374 { 00:21:36.374 "cntlid": 97, 00:21:36.374 "qid": 0, 00:21:36.374 "state": "enabled", 00:21:36.374 "thread": "nvmf_tgt_poll_group_000", 00:21:36.374 "listen_address": { 00:21:36.374 "trtype": "TCP", 00:21:36.374 "adrfam": "IPv4", 00:21:36.374 "traddr": "10.0.0.2", 00:21:36.374 "trsvcid": "4420" 00:21:36.374 }, 00:21:36.374 "peer_address": { 00:21:36.374 "trtype": "TCP", 00:21:36.374 "adrfam": "IPv4", 00:21:36.374 "traddr": "10.0.0.1", 00:21:36.374 "trsvcid": "38554" 00:21:36.374 }, 00:21:36.374 "auth": { 00:21:36.374 "state": "completed", 00:21:36.374 "digest": "sha512", 00:21:36.374 "dhgroup": "null" 00:21:36.374 } 00:21:36.374 } 00:21:36.374 ]' 00:21:36.374 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.632 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.632 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.632 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:36.632 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.632 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.632 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:36.632 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.889 18:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:21:37.819 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.819 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.820 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.820 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.820 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.820 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.820 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.820 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.077 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:38.077 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.078 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.335 00:21:38.335 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.335 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.335 18:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.593 { 00:21:38.593 "cntlid": 99, 00:21:38.593 "qid": 0, 00:21:38.593 "state": "enabled", 00:21:38.593 "thread": "nvmf_tgt_poll_group_000", 00:21:38.593 "listen_address": { 00:21:38.593 "trtype": "TCP", 00:21:38.593 "adrfam": "IPv4", 00:21:38.593 "traddr": "10.0.0.2", 00:21:38.593 "trsvcid": "4420" 00:21:38.593 }, 00:21:38.593 "peer_address": { 00:21:38.593 "trtype": "TCP", 00:21:38.593 "adrfam": "IPv4", 00:21:38.593 "traddr": "10.0.0.1", 00:21:38.593 "trsvcid": "38594" 00:21:38.593 }, 00:21:38.593 "auth": { 00:21:38.593 "state": "completed", 00:21:38.593 "digest": "sha512", 00:21:38.593 "dhgroup": "null" 00:21:38.593 } 00:21:38.593 } 00:21:38.593 ]' 00:21:38.593 
18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:38.593 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.850 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.850 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.850 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.107 18:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:21:40.040 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.040 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.040 18:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.040 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.040 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.040 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.040 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.040 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.297 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:40.297 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.297 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.297 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:40.297 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.297 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.297 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.297 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.298 18:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.298 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.298 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.298 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.554 00:21:40.554 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.555 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.555 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.812 { 00:21:40.812 "cntlid": 101, 00:21:40.812 "qid": 0, 00:21:40.812 "state": "enabled", 00:21:40.812 "thread": "nvmf_tgt_poll_group_000", 00:21:40.812 "listen_address": { 00:21:40.812 "trtype": "TCP", 00:21:40.812 "adrfam": "IPv4", 00:21:40.812 "traddr": "10.0.0.2", 00:21:40.812 "trsvcid": "4420" 00:21:40.812 }, 00:21:40.812 "peer_address": { 00:21:40.812 "trtype": "TCP", 00:21:40.812 "adrfam": "IPv4", 00:21:40.812 "traddr": "10.0.0.1", 00:21:40.812 "trsvcid": "38618" 00:21:40.812 }, 00:21:40.812 "auth": { 00:21:40.812 "state": "completed", 00:21:40.812 "digest": "sha512", 00:21:40.812 "dhgroup": "null" 00:21:40.812 } 00:21:40.812 } 00:21:40.812 ]' 00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.812 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.069 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:41.069 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.069 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.069 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.069 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.326 18:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:21:42.257 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.257 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.257 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.257 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.257 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.257 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.257 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.257 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.515 18:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.515 18:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.772 00:21:42.772 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.772 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.772 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.029 { 00:21:43.029 "cntlid": 103, 00:21:43.029 "qid": 0, 00:21:43.029 "state": "enabled", 00:21:43.029 "thread": "nvmf_tgt_poll_group_000", 00:21:43.029 "listen_address": { 00:21:43.029 "trtype": "TCP", 00:21:43.029 "adrfam": "IPv4", 00:21:43.029 "traddr": "10.0.0.2", 00:21:43.029 "trsvcid": "4420" 00:21:43.029 }, 00:21:43.029 "peer_address": { 00:21:43.029 "trtype": "TCP", 00:21:43.029 "adrfam": "IPv4", 00:21:43.029 "traddr": "10.0.0.1", 00:21:43.029 "trsvcid": "57404" 00:21:43.029 }, 00:21:43.029 "auth": { 00:21:43.029 "state": "completed", 00:21:43.029 "digest": "sha512", 00:21:43.029 "dhgroup": "null" 00:21:43.029 } 00:21:43.029 } 00:21:43.029 ]' 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.029 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.288 18:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:21:44.217 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.218 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.218 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.218 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.218 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.218 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.218 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.218 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.218 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:21:44.522 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.783 00:21:44.783 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.783 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.783 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.040 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.040 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.040 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.040 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.040 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.040 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.040 { 00:21:45.040 "cntlid": 105, 00:21:45.040 "qid": 0, 00:21:45.040 "state": "enabled", 00:21:45.040 "thread": "nvmf_tgt_poll_group_000", 00:21:45.040 "listen_address": { 00:21:45.040 "trtype": "TCP", 00:21:45.040 "adrfam": "IPv4", 00:21:45.040 "traddr": "10.0.0.2", 00:21:45.040 "trsvcid": "4420" 00:21:45.040 }, 00:21:45.040 "peer_address": { 00:21:45.040 "trtype": "TCP", 00:21:45.040 "adrfam": "IPv4", 
00:21:45.040 "traddr": "10.0.0.1", 00:21:45.040 "trsvcid": "57440" 00:21:45.040 }, 00:21:45.040 "auth": { 00:21:45.040 "state": "completed", 00:21:45.040 "digest": "sha512", 00:21:45.040 "dhgroup": "ffdhe2048" 00:21:45.040 } 00:21:45.040 } 00:21:45.040 ]' 00:21:45.040 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.297 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.297 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.297 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.297 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.297 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.297 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.297 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.555 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:21:46.486 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.486 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.486 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.486 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.486 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.486 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.486 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.486 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.486 18:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.744 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.001 00:21:47.001 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.001 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.001 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.259 18:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.259 { 00:21:47.259 "cntlid": 107, 00:21:47.259 "qid": 0, 00:21:47.259 "state": "enabled", 00:21:47.259 "thread": "nvmf_tgt_poll_group_000", 00:21:47.259 "listen_address": { 00:21:47.259 "trtype": "TCP", 00:21:47.259 "adrfam": "IPv4", 00:21:47.259 "traddr": "10.0.0.2", 00:21:47.259 "trsvcid": "4420" 00:21:47.259 }, 00:21:47.259 "peer_address": { 00:21:47.259 "trtype": "TCP", 00:21:47.259 "adrfam": "IPv4", 00:21:47.259 "traddr": "10.0.0.1", 00:21:47.259 "trsvcid": "57470" 00:21:47.259 }, 00:21:47.259 "auth": { 00:21:47.259 "state": "completed", 00:21:47.259 "digest": "sha512", 00:21:47.259 "dhgroup": "ffdhe2048" 00:21:47.259 } 00:21:47.259 } 00:21:47.259 ]' 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.259 18:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.259 18:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.517 18:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:21:48.449 18:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.449 18:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.449 18:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.449 18:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.449 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.449 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.449 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.449 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.706 18:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.706 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:48.963 00:21:48.963 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.963 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.963 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.220 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.220 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.221 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.221 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.221 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.221 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.221 { 00:21:49.221 "cntlid": 109, 00:21:49.221 "qid": 0, 00:21:49.221 "state": "enabled", 00:21:49.221 "thread": "nvmf_tgt_poll_group_000", 00:21:49.221 "listen_address": { 00:21:49.221 "trtype": "TCP", 00:21:49.221 "adrfam": "IPv4", 00:21:49.221 "traddr": "10.0.0.2", 00:21:49.221 "trsvcid": "4420" 00:21:49.221 }, 00:21:49.221 "peer_address": { 00:21:49.221 "trtype": "TCP", 00:21:49.221 "adrfam": "IPv4", 00:21:49.221 "traddr": "10.0.0.1", 00:21:49.221 "trsvcid": "57482" 00:21:49.221 }, 00:21:49.221 "auth": { 00:21:49.221 "state": "completed", 00:21:49.221 "digest": "sha512", 00:21:49.221 "dhgroup": "ffdhe2048" 00:21:49.221 } 00:21:49.221 } 00:21:49.221 ]' 00:21:49.221 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.221 
18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.221 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.478 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.478 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.478 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.478 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.478 18:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.735 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.667 18:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.667 18:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.667 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.232 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.232 { 00:21:51.232 "cntlid": 111, 00:21:51.232 "qid": 0, 
00:21:51.232 "state": "enabled", 00:21:51.232 "thread": "nvmf_tgt_poll_group_000", 00:21:51.232 "listen_address": { 00:21:51.232 "trtype": "TCP", 00:21:51.232 "adrfam": "IPv4", 00:21:51.232 "traddr": "10.0.0.2", 00:21:51.232 "trsvcid": "4420" 00:21:51.232 }, 00:21:51.232 "peer_address": { 00:21:51.232 "trtype": "TCP", 00:21:51.232 "adrfam": "IPv4", 00:21:51.232 "traddr": "10.0.0.1", 00:21:51.232 "trsvcid": "57516" 00:21:51.232 }, 00:21:51.232 "auth": { 00:21:51.232 "state": "completed", 00:21:51.232 "digest": "sha512", 00:21:51.232 "dhgroup": "ffdhe2048" 00:21:51.232 } 00:21:51.232 } 00:21:51.232 ]' 00:21:51.232 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.490 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.490 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.490 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.490 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.490 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.490 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.490 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.749 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.682 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.940 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.940 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.940 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.197 00:21:53.197 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.197 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.197 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.455 { 00:21:53.455 "cntlid": 113, 00:21:53.455 "qid": 0, 00:21:53.455 "state": "enabled", 00:21:53.455 "thread": "nvmf_tgt_poll_group_000", 00:21:53.455 "listen_address": { 00:21:53.455 "trtype": "TCP", 00:21:53.455 "adrfam": "IPv4", 00:21:53.455 "traddr": "10.0.0.2", 00:21:53.455 "trsvcid": "4420" 00:21:53.455 }, 00:21:53.455 "peer_address": { 00:21:53.455 "trtype": "TCP", 00:21:53.455 "adrfam": "IPv4", 00:21:53.455 "traddr": "10.0.0.1", 00:21:53.455 "trsvcid": "40078" 00:21:53.455 }, 00:21:53.455 "auth": { 00:21:53.455 "state": "completed", 00:21:53.455 "digest": "sha512", 00:21:53.455 "dhgroup": "ffdhe3072" 00:21:53.455 } 00:21:53.455 } 00:21:53.455 ]' 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.455 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.455 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
00:21:53.455 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.455 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.455 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.455 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.711 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:21:54.644 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.644 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.644 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.644 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.644 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.644 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:21:54.644 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.644 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.908 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.472 00:21:55.472 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.472 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.472 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.472 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.472 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.472 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.472 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.472 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.472 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.472 { 00:21:55.472 "cntlid": 115, 00:21:55.472 "qid": 0, 00:21:55.472 "state": "enabled", 00:21:55.472 "thread": "nvmf_tgt_poll_group_000", 00:21:55.472 "listen_address": { 00:21:55.472 "trtype": "TCP", 00:21:55.472 "adrfam": "IPv4", 00:21:55.472 "traddr": "10.0.0.2", 00:21:55.472 "trsvcid": "4420" 00:21:55.472 }, 00:21:55.472 "peer_address": { 
00:21:55.472 "trtype": "TCP", 00:21:55.472 "adrfam": "IPv4", 00:21:55.472 "traddr": "10.0.0.1", 00:21:55.472 "trsvcid": "40112" 00:21:55.472 }, 00:21:55.472 "auth": { 00:21:55.472 "state": "completed", 00:21:55.472 "digest": "sha512", 00:21:55.472 "dhgroup": "ffdhe3072" 00:21:55.472 } 00:21:55.472 } 00:21:55.472 ]' 00:21:55.472 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.730 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.730 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.730 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.730 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.730 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.730 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.730 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.987 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:21:56.918 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:56.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.918 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.918 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.918 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.918 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.918 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.918 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.918 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.175 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.432 00:21:57.432 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.432 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.432 18:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.690 18:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.690 { 00:21:57.690 "cntlid": 117, 00:21:57.690 "qid": 0, 00:21:57.690 "state": "enabled", 00:21:57.690 "thread": "nvmf_tgt_poll_group_000", 00:21:57.690 "listen_address": { 00:21:57.690 "trtype": "TCP", 00:21:57.690 "adrfam": "IPv4", 00:21:57.690 "traddr": "10.0.0.2", 00:21:57.690 "trsvcid": "4420" 00:21:57.690 }, 00:21:57.690 "peer_address": { 00:21:57.690 "trtype": "TCP", 00:21:57.690 "adrfam": "IPv4", 00:21:57.690 "traddr": "10.0.0.1", 00:21:57.690 "trsvcid": "40148" 00:21:57.690 }, 00:21:57.690 "auth": { 00:21:57.690 "state": "completed", 00:21:57.690 "digest": "sha512", 00:21:57.690 "dhgroup": "ffdhe3072" 00:21:57.690 } 00:21:57.690 } 00:21:57.690 ]' 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.690 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.690 18:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.947 18:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:21:58.879 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.879 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.879 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.879 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.879 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.879 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.879 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.879 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.136 18:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.136 18:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.393 00:21:59.393 18:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.393 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.393 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.651 { 00:21:59.651 "cntlid": 119, 00:21:59.651 "qid": 0, 00:21:59.651 "state": "enabled", 00:21:59.651 "thread": "nvmf_tgt_poll_group_000", 00:21:59.651 "listen_address": { 00:21:59.651 "trtype": "TCP", 00:21:59.651 "adrfam": "IPv4", 00:21:59.651 "traddr": "10.0.0.2", 00:21:59.651 "trsvcid": "4420" 00:21:59.651 }, 00:21:59.651 "peer_address": { 00:21:59.651 "trtype": "TCP", 00:21:59.651 "adrfam": "IPv4", 00:21:59.651 "traddr": "10.0.0.1", 00:21:59.651 "trsvcid": "40180" 00:21:59.651 }, 00:21:59.651 "auth": { 00:21:59.651 "state": "completed", 00:21:59.651 "digest": "sha512", 00:21:59.651 "dhgroup": "ffdhe3072" 00:21:59.651 } 00:21:59.651 } 00:21:59.651 ]' 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.651 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.909 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.909 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.909 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.909 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.909 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.166 18:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:22:01.099 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.099 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.099 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.099 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.099 18:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.099 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.099 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.099 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.100 18:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.100 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.665 00:22:01.665 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.665 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.665 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.923 { 
00:22:01.923 "cntlid": 121, 00:22:01.923 "qid": 0, 00:22:01.923 "state": "enabled", 00:22:01.923 "thread": "nvmf_tgt_poll_group_000", 00:22:01.923 "listen_address": { 00:22:01.923 "trtype": "TCP", 00:22:01.923 "adrfam": "IPv4", 00:22:01.923 "traddr": "10.0.0.2", 00:22:01.923 "trsvcid": "4420" 00:22:01.923 }, 00:22:01.923 "peer_address": { 00:22:01.923 "trtype": "TCP", 00:22:01.923 "adrfam": "IPv4", 00:22:01.923 "traddr": "10.0.0.1", 00:22:01.923 "trsvcid": "40222" 00:22:01.923 }, 00:22:01.923 "auth": { 00:22:01.923 "state": "completed", 00:22:01.923 "digest": "sha512", 00:22:01.923 "dhgroup": "ffdhe4096" 00:22:01.923 } 00:22:01.923 } 00:22:01.923 ]' 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.923 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.180 18:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:22:03.113 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.113 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.113 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.113 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.113 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.113 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.113 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.113 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.371 18:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.371 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.628 00:22:03.628 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.628 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.628 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.886 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.886 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.886 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.886 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.144 { 00:22:04.144 "cntlid": 123, 00:22:04.144 "qid": 0, 00:22:04.144 "state": "enabled", 00:22:04.144 "thread": "nvmf_tgt_poll_group_000", 00:22:04.144 "listen_address": { 00:22:04.144 "trtype": "TCP", 00:22:04.144 "adrfam": "IPv4", 00:22:04.144 "traddr": "10.0.0.2", 00:22:04.144 "trsvcid": "4420" 00:22:04.144 }, 00:22:04.144 "peer_address": { 00:22:04.144 "trtype": "TCP", 00:22:04.144 "adrfam": "IPv4", 00:22:04.144 "traddr": "10.0.0.1", 00:22:04.144 "trsvcid": "35224" 00:22:04.144 }, 00:22:04.144 "auth": { 00:22:04.144 "state": "completed", 00:22:04.144 "digest": "sha512", 00:22:04.144 "dhgroup": "ffdhe4096" 00:22:04.144 } 00:22:04.144 } 00:22:04.144 ]' 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 
00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.144 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.401 18:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:22:05.333 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.333 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.333 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.333 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.333 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.333 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.333 18:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.333 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.591 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.873 00:22:05.873 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.873 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.873 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.144 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.144 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.144 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.144 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.144 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.144 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.144 { 00:22:06.144 "cntlid": 125, 00:22:06.144 "qid": 0, 00:22:06.144 "state": "enabled", 00:22:06.144 "thread": "nvmf_tgt_poll_group_000", 00:22:06.144 "listen_address": { 00:22:06.144 "trtype": "TCP", 00:22:06.144 "adrfam": "IPv4", 00:22:06.144 "traddr": "10.0.0.2", 00:22:06.144 "trsvcid": "4420" 00:22:06.144 }, 00:22:06.144 "peer_address": { 
00:22:06.144 "trtype": "TCP", 00:22:06.144 "adrfam": "IPv4", 00:22:06.144 "traddr": "10.0.0.1", 00:22:06.144 "trsvcid": "35238" 00:22:06.144 }, 00:22:06.145 "auth": { 00:22:06.145 "state": "completed", 00:22:06.145 "digest": "sha512", 00:22:06.145 "dhgroup": "ffdhe4096" 00:22:06.145 } 00:22:06.145 } 00:22:06.145 ]' 00:22:06.145 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.145 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.145 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.145 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.145 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.145 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.145 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.145 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.401 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:22:07.334 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:22:07.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.334 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.334 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.334 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.334 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.334 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.334 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.334 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.592 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.849 00:22:07.849 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.849 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.849 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.107 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.107 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.107 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:08.107 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.107 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.107 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.107 { 00:22:08.107 "cntlid": 127, 00:22:08.107 "qid": 0, 00:22:08.107 "state": "enabled", 00:22:08.107 "thread": "nvmf_tgt_poll_group_000", 00:22:08.107 "listen_address": { 00:22:08.107 "trtype": "TCP", 00:22:08.107 "adrfam": "IPv4", 00:22:08.107 "traddr": "10.0.0.2", 00:22:08.107 "trsvcid": "4420" 00:22:08.107 }, 00:22:08.107 "peer_address": { 00:22:08.107 "trtype": "TCP", 00:22:08.107 "adrfam": "IPv4", 00:22:08.107 "traddr": "10.0.0.1", 00:22:08.107 "trsvcid": "35266" 00:22:08.107 }, 00:22:08.107 "auth": { 00:22:08.107 "state": "completed", 00:22:08.107 "digest": "sha512", 00:22:08.107 "dhgroup": "ffdhe4096" 00:22:08.107 } 00:22:08.107 } 00:22:08.107 ]' 00:22:08.107 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.365 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.365 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.365 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:08.365 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.365 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.365 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.365 18:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.622 18:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:22:09.554 18:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.554 18:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.554 18:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.554 18:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.554 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.554 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.554 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.554 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.554 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.812 18:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.812 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:10.378 00:22:10.378 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.378 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.378 18:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.636 { 00:22:10.636 "cntlid": 129, 00:22:10.636 "qid": 0, 00:22:10.636 "state": "enabled", 00:22:10.636 "thread": "nvmf_tgt_poll_group_000", 00:22:10.636 "listen_address": { 00:22:10.636 "trtype": "TCP", 00:22:10.636 "adrfam": "IPv4", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "trsvcid": "4420" 00:22:10.636 }, 00:22:10.636 "peer_address": { 00:22:10.636 "trtype": "TCP", 00:22:10.636 "adrfam": "IPv4", 00:22:10.636 "traddr": "10.0.0.1", 00:22:10.636 "trsvcid": "35300" 00:22:10.636 }, 00:22:10.636 "auth": { 00:22:10.636 "state": "completed", 00:22:10.636 "digest": "sha512", 00:22:10.636 "dhgroup": "ffdhe6144" 00:22:10.636 } 00:22:10.636 } 00:22:10.636 ]' 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.636 
18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.636 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.893 18:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:22:11.826 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.826 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.826 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:11.826 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.826 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.826 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.826 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.826 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.084 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.649 00:22:12.649 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.649 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.649 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:22:12.906 { 00:22:12.906 "cntlid": 131, 00:22:12.906 "qid": 0, 00:22:12.906 "state": "enabled", 00:22:12.906 "thread": "nvmf_tgt_poll_group_000", 00:22:12.906 "listen_address": { 00:22:12.906 "trtype": "TCP", 00:22:12.906 "adrfam": "IPv4", 00:22:12.906 "traddr": "10.0.0.2", 00:22:12.906 "trsvcid": "4420" 00:22:12.906 }, 00:22:12.906 "peer_address": { 00:22:12.906 "trtype": "TCP", 00:22:12.906 "adrfam": "IPv4", 00:22:12.906 "traddr": "10.0.0.1", 00:22:12.906 "trsvcid": "35308" 00:22:12.906 }, 00:22:12.906 "auth": { 00:22:12.906 "state": "completed", 00:22:12.906 "digest": "sha512", 00:22:12.906 "dhgroup": "ffdhe6144" 00:22:12.906 } 00:22:12.906 } 00:22:12.906 ]' 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.906 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.163 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:22:14.096 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.096 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.096 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.096 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.096 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.096 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.096 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.096 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.354 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.919 00:22:14.919 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.920 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.920 18:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.177 { 00:22:15.177 "cntlid": 133, 00:22:15.177 "qid": 0, 00:22:15.177 "state": "enabled", 00:22:15.177 "thread": "nvmf_tgt_poll_group_000", 00:22:15.177 "listen_address": { 00:22:15.177 "trtype": "TCP", 00:22:15.177 "adrfam": "IPv4", 00:22:15.177 "traddr": "10.0.0.2", 00:22:15.177 "trsvcid": "4420" 00:22:15.177 }, 00:22:15.177 "peer_address": { 00:22:15.177 "trtype": "TCP", 00:22:15.177 "adrfam": "IPv4", 00:22:15.177 "traddr": "10.0.0.1", 00:22:15.177 "trsvcid": "45208" 00:22:15.177 }, 00:22:15.177 "auth": { 00:22:15.177 "state": "completed", 00:22:15.177 "digest": "sha512", 00:22:15.177 "dhgroup": "ffdhe6144" 00:22:15.177 } 00:22:15.177 } 00:22:15.177 ]' 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.177 18:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.177 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.434 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:22:16.363 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.363 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.363 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.363 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.363 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.363 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.363 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:16.363 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:16.620 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:16.620 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.620 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.620 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:16.620 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:16.621 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.621 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:16.621 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.621 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.621 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.621 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.621 18:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.187 00:22:17.187 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.187 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.187 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.446 { 00:22:17.446 "cntlid": 135, 00:22:17.446 "qid": 0, 00:22:17.446 "state": "enabled", 00:22:17.446 "thread": "nvmf_tgt_poll_group_000", 00:22:17.446 "listen_address": { 00:22:17.446 "trtype": "TCP", 00:22:17.446 "adrfam": "IPv4", 00:22:17.446 "traddr": "10.0.0.2", 00:22:17.446 "trsvcid": "4420" 00:22:17.446 }, 00:22:17.446 "peer_address": { 00:22:17.446 "trtype": "TCP", 00:22:17.446 "adrfam": "IPv4", 00:22:17.446 "traddr": "10.0.0.1", 00:22:17.446 "trsvcid": 
"45226" 00:22:17.446 }, 00:22:17.446 "auth": { 00:22:17.446 "state": "completed", 00:22:17.446 "digest": "sha512", 00:22:17.446 "dhgroup": "ffdhe6144" 00:22:17.446 } 00:22:17.446 } 00:22:17.446 ]' 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.446 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.446 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.446 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.446 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.446 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.446 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.704 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.637 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.895 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.827 00:22:19.827 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.827 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.828 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.085 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.086 { 00:22:20.086 "cntlid": 137, 00:22:20.086 "qid": 0, 00:22:20.086 "state": "enabled", 00:22:20.086 "thread": "nvmf_tgt_poll_group_000", 00:22:20.086 "listen_address": { 00:22:20.086 "trtype": "TCP", 00:22:20.086 "adrfam": "IPv4", 00:22:20.086 "traddr": "10.0.0.2", 00:22:20.086 "trsvcid": "4420" 00:22:20.086 }, 00:22:20.086 "peer_address": { 00:22:20.086 "trtype": "TCP", 00:22:20.086 "adrfam": "IPv4", 00:22:20.086 "traddr": "10.0.0.1", 00:22:20.086 "trsvcid": "45246" 00:22:20.086 }, 00:22:20.086 "auth": { 00:22:20.086 "state": "completed", 00:22:20.086 "digest": "sha512", 00:22:20.086 "dhgroup": "ffdhe8192" 00:22:20.086 } 00:22:20.086 } 00:22:20.086 ]' 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.086 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.343 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:22:21.275 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.275 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.275 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.275 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.275 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.275 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.276 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.276 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.533 18:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.533 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:22.498 00:22:22.498 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.498 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.498 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.756 { 00:22:22.756 "cntlid": 139, 00:22:22.756 "qid": 0, 00:22:22.756 "state": "enabled", 00:22:22.756 "thread": "nvmf_tgt_poll_group_000", 00:22:22.756 "listen_address": { 00:22:22.756 "trtype": "TCP", 00:22:22.756 "adrfam": "IPv4", 00:22:22.756 "traddr": "10.0.0.2", 00:22:22.756 "trsvcid": "4420" 00:22:22.756 }, 00:22:22.756 "peer_address": { 00:22:22.756 "trtype": "TCP", 00:22:22.756 "adrfam": "IPv4", 00:22:22.756 "traddr": "10.0.0.1", 00:22:22.756 "trsvcid": "45282" 00:22:22.756 }, 00:22:22.756 "auth": { 00:22:22.756 "state": "completed", 00:22:22.756 "digest": "sha512", 00:22:22.756 "dhgroup": "ffdhe8192" 00:22:22.756 } 00:22:22.756 } 00:22:22.756 ]' 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.756 
18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.756 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.014 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGJkMzEzMDdmOGI0YjU3MDlmMWZhMDA2OTRjY2MyODi849D5: --dhchap-ctrl-secret DHHC-1:02:MDI1NmE4MmJhOWEwNWZmMjNiYTM5NjdiN2NjMzQ5MmMyNTNhNWIxODk3MzBlYzM5IRf+ww==: 00:22:23.947 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.947 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.947 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.947 18:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.947 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.947 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.947 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:23.947 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.205 18:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.205 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.139 00:22:25.139 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.139 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.139 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.397 { 
00:22:25.397 "cntlid": 141, 00:22:25.397 "qid": 0, 00:22:25.397 "state": "enabled", 00:22:25.397 "thread": "nvmf_tgt_poll_group_000", 00:22:25.397 "listen_address": { 00:22:25.397 "trtype": "TCP", 00:22:25.397 "adrfam": "IPv4", 00:22:25.397 "traddr": "10.0.0.2", 00:22:25.397 "trsvcid": "4420" 00:22:25.397 }, 00:22:25.397 "peer_address": { 00:22:25.397 "trtype": "TCP", 00:22:25.397 "adrfam": "IPv4", 00:22:25.397 "traddr": "10.0.0.1", 00:22:25.397 "trsvcid": "50132" 00:22:25.397 }, 00:22:25.397 "auth": { 00:22:25.397 "state": "completed", 00:22:25.397 "digest": "sha512", 00:22:25.397 "dhgroup": "ffdhe8192" 00:22:25.397 } 00:22:25.397 } 00:22:25.397 ]' 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.397 18:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.656 18:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTlkYmZkNmM1ZTBiMTRjOTBmN2UxZTNiNzhkOGY2MGMzMzRiMzBmNzg3OWI4NmIyPOXXuA==: --dhchap-ctrl-secret DHHC-1:01:N2E4ZDE0MDc5ZDFlYzE4NTkyZmVkYjM5YWU0Yzg3MWRQzGSl: 00:22:26.587 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.587 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.587 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.587 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.587 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.587 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.587 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:26.587 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.845 18:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.779 00:22:27.779 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.779 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.779 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.037 18:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.037 { 00:22:28.037 "cntlid": 143, 00:22:28.037 "qid": 0, 00:22:28.037 "state": "enabled", 00:22:28.037 "thread": "nvmf_tgt_poll_group_000", 00:22:28.037 "listen_address": { 00:22:28.037 "trtype": "TCP", 00:22:28.037 "adrfam": "IPv4", 00:22:28.037 "traddr": "10.0.0.2", 00:22:28.037 "trsvcid": "4420" 00:22:28.037 }, 00:22:28.037 "peer_address": { 00:22:28.037 "trtype": "TCP", 00:22:28.037 "adrfam": "IPv4", 00:22:28.037 "traddr": "10.0.0.1", 00:22:28.037 "trsvcid": "50152" 00:22:28.037 }, 00:22:28.037 "auth": { 00:22:28.037 "state": "completed", 00:22:28.037 "digest": "sha512", 00:22:28.037 "dhgroup": "ffdhe8192" 00:22:28.037 } 00:22:28.037 } 00:22:28.037 ]' 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.037 18:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.037 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.295 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:29.229 18:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.229 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.487 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.421 00:22:30.421 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.421 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.421 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.679 { 00:22:30.679 "cntlid": 145, 00:22:30.679 "qid": 0, 00:22:30.679 "state": "enabled", 
00:22:30.679 "thread": "nvmf_tgt_poll_group_000", 00:22:30.679 "listen_address": { 00:22:30.679 "trtype": "TCP", 00:22:30.679 "adrfam": "IPv4", 00:22:30.679 "traddr": "10.0.0.2", 00:22:30.679 "trsvcid": "4420" 00:22:30.679 }, 00:22:30.679 "peer_address": { 00:22:30.679 "trtype": "TCP", 00:22:30.679 "adrfam": "IPv4", 00:22:30.679 "traddr": "10.0.0.1", 00:22:30.679 "trsvcid": "50186" 00:22:30.679 }, 00:22:30.679 "auth": { 00:22:30.679 "state": "completed", 00:22:30.679 "digest": "sha512", 00:22:30.679 "dhgroup": "ffdhe8192" 00:22:30.679 } 00:22:30.679 } 00:22:30.679 ]' 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.679 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.937 18:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZjY1OTkwNmQ0NzE2YWY0ODNhZjdhZGM0YzBjYThiNzZlOWNjYWQ0MWQwMzAwYzZlaka5/g==: --dhchap-ctrl-secret DHHC-1:03:ZmExYTFkY2JhYmU4MDQyN2RhNDY5YjZhOTY0ZWEzMDVkYjAyNDU1MWQ3YzNkOTFhYjJjN2U0MmExMWJjMDAxY5r9jkQ=: 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:31.871 
18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.871 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:32.805 request: 00:22:32.805 { 00:22:32.805 "name": "nvme0", 00:22:32.805 "trtype": "tcp", 00:22:32.805 "traddr": "10.0.0.2", 00:22:32.805 "adrfam": "ipv4", 00:22:32.805 "trsvcid": "4420", 00:22:32.805 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:32.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.805 "prchk_reftag": false, 00:22:32.805 "prchk_guard": false, 00:22:32.805 "hdgst": false, 00:22:32.805 "ddgst": false, 00:22:32.805 "dhchap_key": "key2", 
00:22:32.805 "method": "bdev_nvme_attach_controller", 00:22:32.805 "req_id": 1 00:22:32.805 } 00:22:32.805 Got JSON-RPC error response 00:22:32.805 response: 00:22:32.805 { 00:22:32.805 "code": -5, 00:22:32.805 "message": "Input/output error" 00:22:32.805 } 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.805 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:33.372 request: 00:22:33.372 { 00:22:33.372 "name": "nvme0", 00:22:33.372 
"trtype": "tcp", 00:22:33.372 "traddr": "10.0.0.2", 00:22:33.372 "adrfam": "ipv4", 00:22:33.372 "trsvcid": "4420", 00:22:33.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.372 "prchk_reftag": false, 00:22:33.372 "prchk_guard": false, 00:22:33.372 "hdgst": false, 00:22:33.372 "ddgst": false, 00:22:33.372 "dhchap_key": "key1", 00:22:33.372 "dhchap_ctrlr_key": "ckey2", 00:22:33.372 "method": "bdev_nvme_attach_controller", 00:22:33.372 "req_id": 1 00:22:33.372 } 00:22:33.372 Got JSON-RPC error response 00:22:33.372 response: 00:22:33.372 { 00:22:33.372 "code": -5, 00:22:33.372 "message": "Input/output error" 00:22:33.372 } 00:22:33.372 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:33.372 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.372 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.372 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.372 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.372 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.373 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.373 18:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.307 request: 00:22:34.307 { 00:22:34.307 "name": "nvme0", 00:22:34.307 "trtype": "tcp", 00:22:34.307 "traddr": "10.0.0.2", 00:22:34.307 "adrfam": "ipv4", 00:22:34.307 "trsvcid": "4420", 00:22:34.307 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:34.307 "prchk_reftag": false, 00:22:34.307 "prchk_guard": false, 00:22:34.307 "hdgst": false, 00:22:34.307 "ddgst": false, 00:22:34.307 "dhchap_key": "key1", 00:22:34.307 "dhchap_ctrlr_key": "ckey1", 00:22:34.307 "method": "bdev_nvme_attach_controller", 00:22:34.307 "req_id": 1 00:22:34.307 } 00:22:34.307 Got JSON-RPC error response 00:22:34.307 response: 00:22:34.307 { 00:22:34.307 "code": -5, 00:22:34.307 "message": "Input/output error" 00:22:34.307 } 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2356814 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2356814 ']' 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2356814 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356814 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356814' 00:22:34.307 killing process with pid 2356814 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2356814 00:22:34.307 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2356814 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2378414 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2378414 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2378414 ']' 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.564 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2378414 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2378414 ']' 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:34.821 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.822 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.079 
18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.079 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.080 18:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:36.011 00:22:36.011 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.011 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.011 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.268 18:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.268 { 00:22:36.268 "cntlid": 1, 00:22:36.268 "qid": 0, 00:22:36.268 "state": "enabled", 00:22:36.268 "thread": "nvmf_tgt_poll_group_000", 00:22:36.268 "listen_address": { 00:22:36.268 "trtype": "TCP", 00:22:36.268 "adrfam": "IPv4", 00:22:36.268 "traddr": "10.0.0.2", 00:22:36.268 "trsvcid": "4420" 00:22:36.268 }, 00:22:36.268 "peer_address": { 00:22:36.268 "trtype": "TCP", 00:22:36.268 "adrfam": "IPv4", 00:22:36.268 "traddr": "10.0.0.1", 00:22:36.268 "trsvcid": "50096" 00:22:36.268 }, 00:22:36.268 "auth": { 00:22:36.268 "state": "completed", 00:22:36.268 "digest": "sha512", 00:22:36.268 "dhgroup": "ffdhe8192" 00:22:36.268 } 00:22:36.268 } 00:22:36.268 ]' 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.268 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.525 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGZhYmU1MTVhNDI2MjllYTEyYWNiZDhlMWM4NzllYTBlYzVjOTM4OTViMjVjYmM0ZWFhZTNkYmI4N2IzZGNmNLYE/cQ=: 00:22:37.456 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:37.713 18:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.713 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:37.970 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.970 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:37.970 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.970 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.970 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.228 request: 00:22:38.228 { 00:22:38.228 "name": "nvme0", 00:22:38.228 "trtype": "tcp", 00:22:38.228 
"traddr": "10.0.0.2", 00:22:38.228 "adrfam": "ipv4", 00:22:38.228 "trsvcid": "4420", 00:22:38.228 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.228 "prchk_reftag": false, 00:22:38.228 "prchk_guard": false, 00:22:38.228 "hdgst": false, 00:22:38.228 "ddgst": false, 00:22:38.228 "dhchap_key": "key3", 00:22:38.228 "method": "bdev_nvme_attach_controller", 00:22:38.228 "req_id": 1 00:22:38.228 } 00:22:38.228 Got JSON-RPC error response 00:22:38.228 response: 00:22:38.228 { 00:22:38.228 "code": -5, 00:22:38.228 "message": "Input/output error" 00:22:38.228 } 00:22:38.228 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.228 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.228 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.228 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.228 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:38.228 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:38.228 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:38.228 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.485 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.741 request: 00:22:38.741 { 00:22:38.741 "name": "nvme0", 00:22:38.741 "trtype": "tcp", 00:22:38.741 "traddr": "10.0.0.2", 00:22:38.741 "adrfam": "ipv4", 00:22:38.741 "trsvcid": "4420", 00:22:38.741 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.741 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.741 "prchk_reftag": false, 00:22:38.741 "prchk_guard": false, 00:22:38.741 "hdgst": false, 00:22:38.741 "ddgst": false, 00:22:38.741 "dhchap_key": "key3", 00:22:38.741 "method": "bdev_nvme_attach_controller", 00:22:38.741 "req_id": 1 00:22:38.741 } 00:22:38.741 Got JSON-RPC error response 00:22:38.741 response: 00:22:38.741 { 00:22:38.741 "code": -5, 00:22:38.741 "message": "Input/output error" 00:22:38.741 } 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.741 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.000 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.258 request: 00:22:39.258 { 00:22:39.258 "name": "nvme0", 00:22:39.258 "trtype": "tcp", 00:22:39.258 "traddr": "10.0.0.2", 00:22:39.258 "adrfam": "ipv4", 00:22:39.258 "trsvcid": "4420", 00:22:39.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:39.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:39.258 "prchk_reftag": false, 00:22:39.258 "prchk_guard": false, 00:22:39.258 "hdgst": false, 00:22:39.258 "ddgst": false, 00:22:39.258 "dhchap_key": "key0", 00:22:39.258 "dhchap_ctrlr_key": "key1", 00:22:39.258 "method": "bdev_nvme_attach_controller", 00:22:39.258 "req_id": 1 00:22:39.258 } 00:22:39.258 Got JSON-RPC error response 00:22:39.258 response: 00:22:39.258 { 00:22:39.258 "code": -5, 00:22:39.258 "message": "Input/output error" 00:22:39.258 } 00:22:39.258 18:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:39.258 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:39.258 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:39.258 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:39.258 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.258 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.516 00:22:39.516 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:39.516 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:39.516 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.773 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.773 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.773 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:40.030 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:40.030 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:40.030 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2356954 00:22:40.030 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2356954 ']' 00:22:40.030 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2356954 00:22:40.030 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:40.030 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.031 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356954 00:22:40.031 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:40.031 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:40.031 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356954' 00:22:40.031 killing process with pid 2356954 00:22:40.031 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2356954 00:22:40.031 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2356954 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:40.597 18:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.597 rmmod nvme_tcp 00:22:40.597 rmmod nvme_fabrics 00:22:40.597 rmmod nvme_keyring 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2378414 ']' 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2378414 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2378414 ']' 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2378414 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2378414 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2378414' 00:22:40.597 killing process with pid 2378414 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2378414 00:22:40.597 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2378414 00:22:40.896 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.896 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.896 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.896 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.896 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.896 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.896 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.896 18:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DCs /tmp/spdk.key-sha256.70y /tmp/spdk.key-sha384.HQf /tmp/spdk.key-sha512.A3h /tmp/spdk.key-sha512.tjp /tmp/spdk.key-sha384.qM9 /tmp/spdk.key-sha256.Nas '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:42.801 00:22:42.801 real 3m0.468s 00:22:42.801 user 7m1.617s 00:22:42.801 sys 0m25.122s 00:22:42.801 18:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.801 ************************************ 00:22:42.801 END TEST nvmf_auth_target 00:22:42.801 ************************************ 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:42.801 ************************************ 00:22:42.801 START TEST nvmf_bdevio_no_huge 00:22:42.801 ************************************ 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:42.801 * Looking for test storage... 
00:22:42.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.801 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:43.060 
18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.060 18:10:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.967 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.967 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.967 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.968 18:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:44.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:44.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:44.968 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:44.968 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.968 18:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.968 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:22:45.228 00:22:45.228 --- 10.0.0.2 ping statistics --- 00:22:45.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.228 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:22:45.228 00:22:45.228 --- 10.0.0.1 ping statistics --- 00:22:45.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.228 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2381169 00:22:45.228 18:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2381169 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2381169 ']' 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.228 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.228 [2024-07-23 18:10:52.723900] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:22:45.228 [2024-07-23 18:10:52.724000] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:45.228 [2024-07-23 18:10:52.797515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.228 [2024-07-23 18:10:52.878604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:45.228 [2024-07-23 18:10:52.878676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.228 [2024-07-23 18:10:52.878707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.228 [2024-07-23 18:10:52.878718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.228 [2024-07-23 18:10:52.878727] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.228 [2024-07-23 18:10:52.878816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:45.228 [2024-07-23 18:10:52.878916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.228 [2024-07-23 18:10:52.878913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:45.228 [2024-07-23 18:10:52.878890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:22:45.487 18:10:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.487 [2024-07-23 18:10:52.992069] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.487 Malloc0 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.487 [2024-07-23 18:10:53.029648] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.487 { 00:22:45.487 "params": { 00:22:45.487 "name": "Nvme$subsystem", 00:22:45.487 "trtype": "$TEST_TRANSPORT", 00:22:45.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.487 "adrfam": "ipv4", 00:22:45.487 "trsvcid": "$NVMF_PORT", 00:22:45.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.487 "hdgst": ${hdgst:-false}, 00:22:45.487 "ddgst": ${ddgst:-false} 00:22:45.487 }, 00:22:45.487 "method": "bdev_nvme_attach_controller" 00:22:45.487 } 00:22:45.487 EOF 00:22:45.487 )") 00:22:45.487 18:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:45.487 18:10:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:45.487 "params": { 00:22:45.487 "name": "Nvme1", 00:22:45.487 "trtype": "tcp", 00:22:45.487 "traddr": "10.0.0.2", 00:22:45.487 "adrfam": "ipv4", 00:22:45.487 "trsvcid": "4420", 00:22:45.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.487 "hdgst": false, 00:22:45.487 "ddgst": false 00:22:45.487 }, 00:22:45.487 "method": "bdev_nvme_attach_controller" 00:22:45.487 }' 00:22:45.487 [2024-07-23 18:10:53.077821] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:22:45.487 [2024-07-23 18:10:53.077926] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2381197 ] 00:22:45.487 [2024-07-23 18:10:53.140880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.745 [2024-07-23 18:10:53.228313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.745 [2024-07-23 18:10:53.228362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.745 [2024-07-23 18:10:53.228366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.002 I/O targets: 00:22:46.002 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:46.002 00:22:46.002 00:22:46.002 CUnit - A unit testing framework for C - Version 2.1-3 00:22:46.002 http://cunit.sourceforge.net/ 00:22:46.003 00:22:46.003 00:22:46.003 Suite: bdevio tests on: Nvme1n1 00:22:46.003 Test: blockdev write read block 
...passed 00:22:46.003 Test: blockdev write zeroes read block ...passed 00:22:46.003 Test: blockdev write zeroes read no split ...passed 00:22:46.003 Test: blockdev write zeroes read split ...passed 00:22:46.003 Test: blockdev write zeroes read split partial ...passed 00:22:46.003 Test: blockdev reset ...[2024-07-23 18:10:53.545125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:46.003 [2024-07-23 18:10:53.545255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cbb00 (9): Bad file descriptor 00:22:46.003 [2024-07-23 18:10:53.564196] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:46.003 passed 00:22:46.003 Test: blockdev write read 8 blocks ...passed 00:22:46.003 Test: blockdev write read size > 128k ...passed 00:22:46.003 Test: blockdev write read invalid size ...passed 00:22:46.003 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:46.003 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:46.003 Test: blockdev write read max offset ...passed 00:22:46.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:46.261 Test: blockdev writev readv 8 blocks ...passed 00:22:46.261 Test: blockdev writev readv 30 x 1block ...passed 00:22:46.261 Test: blockdev writev readv block ...passed 00:22:46.261 Test: blockdev writev readv size > 128k ...passed 00:22:46.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:46.261 Test: blockdev comparev and writev ...[2024-07-23 18:10:53.736558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.261 [2024-07-23 18:10:53.736594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.736618] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.261 [2024-07-23 18:10:53.736634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.736975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.261 [2024-07-23 18:10:53.736999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.737020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.261 [2024-07-23 18:10:53.737036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.737394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.261 [2024-07-23 18:10:53.737418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.737439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.261 [2024-07-23 18:10:53.737454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.737785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.261 [2024-07-23 18:10:53.737807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.737828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.261 [2024-07-23 18:10:53.737843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:46.261 passed 00:22:46.261 Test: blockdev nvme passthru rw ...passed 00:22:46.261 Test: blockdev nvme passthru vendor specific ...[2024-07-23 18:10:53.819567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.261 [2024-07-23 18:10:53.819594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.819734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.261 [2024-07-23 18:10:53.819756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.819897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.261 [2024-07-23 18:10:53.819919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:46.261 [2024-07-23 18:10:53.820061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.261 [2024-07-23 18:10:53.820083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:46.261 passed 00:22:46.261 Test: blockdev nvme admin passthru ...passed 00:22:46.261 Test: blockdev copy ...passed 00:22:46.261 00:22:46.261 Run Summary: Type Total Ran Passed Failed Inactive 
00:22:46.261 suites 1 1 n/a 0 0 00:22:46.261 tests 23 23 23 0 0 00:22:46.261 asserts 152 152 152 0 n/a 00:22:46.261 00:22:46.261 Elapsed time = 0.901 seconds 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.828 rmmod nvme_tcp 00:22:46.828 rmmod nvme_fabrics 00:22:46.828 rmmod nvme_keyring 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:46.828 
18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2381169 ']' 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2381169 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2381169 ']' 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2381169 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2381169 00:22:46.828 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:46.829 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:46.829 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2381169' 00:22:46.829 killing process with pid 2381169 00:22:46.829 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2381169 00:22:46.829 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2381169 00:22:47.087 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.087 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:47.087 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:47.087 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.087 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.087 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.087 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.087 18:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.622 00:22:49.622 real 0m6.258s 00:22:49.622 user 0m9.320s 00:22:49.622 sys 0m2.434s 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.622 ************************************ 00:22:49.622 END TEST nvmf_bdevio_no_huge 00:22:49.622 ************************************ 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:49.622 ************************************ 00:22:49.622 START TEST nvmf_tls 00:22:49.622 ************************************ 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:49.622 * Looking for test storage... 00:22:49.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:49.622 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.623 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.529 18:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.529 18:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:51.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:51.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.529 18:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.529 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:51.530 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:51.530 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:51.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:22:51.530 00:22:51.530 --- 10.0.0.2 ping statistics --- 00:22:51.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.530 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:22:51.530 00:22:51.530 --- 10.0.0.1 ping statistics --- 00:22:51.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.530 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2383261 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2383261 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2383261 ']' 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.530 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.530 [2024-07-23 18:10:58.960508] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:22:51.530 [2024-07-23 18:10:58.960592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.530 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.530 [2024-07-23 18:10:59.025313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.530 [2024-07-23 18:10:59.109238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.530 [2024-07-23 18:10:59.109290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.530 [2024-07-23 18:10:59.109326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.530 [2024-07-23 18:10:59.109338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.530 [2024-07-23 18:10:59.109349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.530 [2024-07-23 18:10:59.109374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.530 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.530 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:51.530 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.530 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.530 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.530 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.530 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:51.530 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:51.788 true 00:22:51.788 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.788 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:52.047 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:52.047 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:52.047 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:52.305 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.305 18:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:52.562 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:52.562 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:52.562 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:52.820 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.821 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:53.079 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:53.079 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:53.079 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.079 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:53.337 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:53.337 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:53.337 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:53.594 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.594 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:53.852 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:53.852 
18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:53.852 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:54.109 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.109 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Sp0FWSGvPT 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.sviq3gc79E 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Sp0FWSGvPT 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sviq3gc79E 00:22:54.367 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:54.625 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:55.192 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Sp0FWSGvPT 00:22:55.192 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Sp0FWSGvPT 00:22:55.192 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.450 [2024-07-23 18:11:02.902921] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.450 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.707 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.966 [2024-07-23 18:11:03.496512] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.966 [2024-07-23 18:11:03.496769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.967 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.225 malloc0 00:22:56.225 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:56.482 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Sp0FWSGvPT 00:22:56.739 
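The `format_interchange_psk` helper used above wraps a raw hex key into the NVMe TLS PSK interchange format, `NVMeTLSkey-1:01:<base64>:`. A minimal re-implementation, assuming (as the `nvmf/common.sh` helper and the log output suggest) that the `01` hash field denotes a CRC-32 of the configured key appended little-endian before base64 encoding:

```python
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, hash_id: int = 1) -> str:
    """Wrap a configured PSK into the NVMe TLS PSK interchange format:
    NVMeTLSkey-1:<hash>:base64(key || CRC32(key)):"""
    key = hex_key.encode("ascii")             # the hex string itself is the key material
    crc = struct.pack("<I", zlib.crc32(key))  # 4-byte little-endian CRC-32 trailer
    blob = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02x}:{blob}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

The test writes two such keys to `mktemp` files, `chmod 0600`s them, and hands one to `nvmf_subsystem_add_host --psk` so that only `host1` can complete the TLS handshake.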
[2024-07-23 18:11:04.346730] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:56.739 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Sp0FWSGvPT 00:22:56.997 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.994 Initializing NVMe Controllers 00:23:06.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.994 Initialization complete. Launching workers. 00:23:06.994 ======================================================== 00:23:06.994 Latency(us) 00:23:06.994 Device Information : IOPS MiB/s Average min max 00:23:06.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8701.87 33.99 7356.70 1130.69 9491.35 00:23:06.994 ======================================================== 00:23:06.994 Total : 8701.87 33.99 7356.70 1130.69 9491.35 00:23:06.994 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sp0FWSGvPT 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Sp0FWSGvPT' 00:23:06.994 18:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2385155 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2385155 /var/tmp/bdevperf.sock 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2385155 ']' 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.994 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.994 [2024-07-23 18:11:14.525663] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:23:06.994 [2024-07-23 18:11:14.525742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385155 ] 00:23:06.994 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.995 [2024-07-23 18:11:14.582712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.253 [2024-07-23 18:11:14.666492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.253 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.253 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.253 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Sp0FWSGvPT 00:23:07.511 [2024-07-23 18:11:15.013039] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.511 [2024-07-23 18:11:15.013175] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.511 TLSTESTn1 00:23:07.511 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:07.768 Running I/O for 10 seconds... 
00:23:17.735 00:23:17.735 Latency(us) 00:23:17.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.735 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:17.735 Verification LBA range: start 0x0 length 0x2000 00:23:17.735 TLSTESTn1 : 10.02 3518.61 13.74 0.00 0.00 36315.22 6602.15 34369.99 00:23:17.735 =================================================================================================================== 00:23:17.735 Total : 3518.61 13.74 0.00 0.00 36315.22 6602.15 34369.99 00:23:17.735 0 00:23:17.735 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.735 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2385155 00:23:17.735 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2385155 ']' 00:23:17.735 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2385155 00:23:17.735 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:17.736 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:17.736 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2385155 00:23:17.736 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:17.736 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:17.736 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2385155' 00:23:17.736 killing process with pid 2385155 00:23:17.736 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2385155 00:23:17.736 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.736 
00:23:17.736 Latency(us) 00:23:17.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.736 =================================================================================================================== 00:23:17.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.736 [2024-07-23 18:11:25.306666] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:17.736 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2385155 00:23:17.994 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sviq3gc79E 00:23:17.994 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:17.994 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sviq3gc79E 00:23:17.994 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:17.994 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sviq3gc79E 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:17.995 18:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sviq3gc79E' 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2386352 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2386352 /var/tmp/bdevperf.sock 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2386352 ']' 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.995 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.995 [2024-07-23 18:11:25.583163] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:23:17.995 [2024-07-23 18:11:25.583247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386352 ] 00:23:17.995 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.995 [2024-07-23 18:11:25.646218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.253 [2024-07-23 18:11:25.730732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.253 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.253 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:18.253 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sviq3gc79E 00:23:18.511 [2024-07-23 18:11:26.089217] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.511 [2024-07-23 18:11:26.089367] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:18.511 [2024-07-23 18:11:26.099950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:18.511 [2024-07-23 18:11:26.100210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9bb0 (107): Transport endpoint is not connected 00:23:18.511 [2024-07-23 18:11:26.101200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9bb0 
(9): Bad file descriptor 00:23:18.511 [2024-07-23 18:11:26.102200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:18.511 [2024-07-23 18:11:26.102218] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:18.511 [2024-07-23 18:11:26.102250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.511 request: 00:23:18.511 { 00:23:18.511 "name": "TLSTEST", 00:23:18.511 "trtype": "tcp", 00:23:18.511 "traddr": "10.0.0.2", 00:23:18.511 "adrfam": "ipv4", 00:23:18.511 "trsvcid": "4420", 00:23:18.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.511 "prchk_reftag": false, 00:23:18.511 "prchk_guard": false, 00:23:18.511 "hdgst": false, 00:23:18.511 "ddgst": false, 00:23:18.511 "psk": "/tmp/tmp.sviq3gc79E", 00:23:18.511 "method": "bdev_nvme_attach_controller", 00:23:18.511 "req_id": 1 00:23:18.511 } 00:23:18.511 Got JSON-RPC error response 00:23:18.511 response: 00:23:18.511 { 00:23:18.511 "code": -5, 00:23:18.511 "message": "Input/output error" 00:23:18.511 } 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2386352 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2386352 ']' 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2386352 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2386352 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:18.511 18:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2386352' 00:23:18.511 killing process with pid 2386352 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2386352 00:23:18.511 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.511 00:23:18.511 Latency(us) 00:23:18.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.511 =================================================================================================================== 00:23:18.511 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.511 [2024-07-23 18:11:26.145029] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:18.511 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2386352 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Sp0FWSGvPT 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Sp0FWSGvPT 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Sp0FWSGvPT 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Sp0FWSGvPT' 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2386482 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2386482 /var/tmp/bdevperf.sock 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@829 -- # '[' -z 2386482 ']' 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.770 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.770 [2024-07-23 18:11:26.378568] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:23:18.770 [2024-07-23 18:11:26.378662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386482 ] 00:23:18.770 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.028 [2024-07-23 18:11:26.439702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.028 [2024-07-23 18:11:26.527537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.028 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.028 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:19.028 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Sp0FWSGvPT 00:23:19.286 [2024-07-23 18:11:26.863459] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.286 [2024-07-23 18:11:26.863582] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:19.286 [2024-07-23 18:11:26.871780] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.286 [2024-07-23 18:11:26.871823] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.286 [2024-07-23 18:11:26.871860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:19.286 [2024-07-23 18:11:26.872501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8efbb0 (107): Transport endpoint is not connected 00:23:19.286 [2024-07-23 18:11:26.873492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8efbb0 (9): Bad file descriptor 00:23:19.286 [2024-07-23 18:11:26.874491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.286 [2024-07-23 18:11:26.874511] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:19.286 [2024-07-23 18:11:26.874543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
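This failure is the expected outcome: the target looks up the PSK by a TLS identity derived from the host and subsystem NQNs, and `host2` was never registered via `nvmf_subsystem_add_host`. Judging from the error text (`Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>`), the identity string is built roughly like this (a sketch inferred from the log, not the authoritative SPDK code):

```python
def psk_identity(hostnqn: str, subnqn: str) -> str:
    """TLS PSK identity as it appears in the target's lookup error:
    'NVMe0R01 <hostnqn> <subnqn>'."""
    return f"NVMe0R01 {hostnqn} {subnqn}"

registered = psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1")
offered = psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1")
print(registered == offered)  # → False: the identity mismatch is why the lookup fails
```

Because the identities differ, `posix_sock_psk_find_session_server_cb` finds no key, the handshake never completes, and `bdev_nvme_attach_controller` surfaces the generic `-5` Input/output error seen in the JSON-RPC response below.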
00:23:19.286 request: 00:23:19.286 { 00:23:19.286 "name": "TLSTEST", 00:23:19.286 "trtype": "tcp", 00:23:19.286 "traddr": "10.0.0.2", 00:23:19.286 "adrfam": "ipv4", 00:23:19.286 "trsvcid": "4420", 00:23:19.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.286 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.286 "prchk_reftag": false, 00:23:19.286 "prchk_guard": false, 00:23:19.286 "hdgst": false, 00:23:19.286 "ddgst": false, 00:23:19.286 "psk": "/tmp/tmp.Sp0FWSGvPT", 00:23:19.286 "method": "bdev_nvme_attach_controller", 00:23:19.286 "req_id": 1 00:23:19.286 } 00:23:19.286 Got JSON-RPC error response 00:23:19.286 response: 00:23:19.286 { 00:23:19.286 "code": -5, 00:23:19.286 "message": "Input/output error" 00:23:19.286 } 00:23:19.286 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2386482 00:23:19.286 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2386482 ']' 00:23:19.286 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2386482 00:23:19.286 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:19.286 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.287 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2386482 00:23:19.287 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:19.287 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:19.287 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2386482' 00:23:19.287 killing process with pid 2386482 00:23:19.287 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2386482 00:23:19.287 Received shutdown signal, test time was 
about 10.000000 seconds 00:23:19.287 00:23:19.287 Latency(us) 00:23:19.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.287 =================================================================================================================== 00:23:19.287 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.287 [2024-07-23 18:11:26.922691] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:19.287 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2386482 00:23:19.544 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sp0FWSGvPT 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sp0FWSGvPT 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sp0FWSGvPT 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Sp0FWSGvPT' 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2386619 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2386619 /var/tmp/bdevperf.sock 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2386619 ']' 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.545 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.545 [2024-07-23 18:11:27.189030] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:23:19.545 [2024-07-23 18:11:27.189106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386619 ] 00:23:19.804 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.804 [2024-07-23 18:11:27.247671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.804 [2024-07-23 18:11:27.329031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.804 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.804 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:19.804 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Sp0FWSGvPT 00:23:20.062 [2024-07-23 18:11:27.672468] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.062 [2024-07-23 18:11:27.672600] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:20.062 [2024-07-23 18:11:27.683909] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.062 [2024-07-23 18:11:27.683939] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.062 [2024-07-23 18:11:27.683992] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.062 [2024-07-23 18:11:27.684635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dcbb0 (107): Transport endpoint is not connected 00:23:20.062 [2024-07-23 18:11:27.685630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dcbb0 (9): Bad file descriptor 00:23:20.062 [2024-07-23 18:11:27.686629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:20.062 [2024-07-23 18:11:27.686649] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.062 [2024-07-23 18:11:27.686666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:20.062 request: 00:23:20.062 { 00:23:20.062 "name": "TLSTEST", 00:23:20.062 "trtype": "tcp", 00:23:20.062 "traddr": "10.0.0.2", 00:23:20.062 "adrfam": "ipv4", 00:23:20.062 "trsvcid": "4420", 00:23:20.062 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.062 "prchk_reftag": false, 00:23:20.062 "prchk_guard": false, 00:23:20.062 "hdgst": false, 00:23:20.062 "ddgst": false, 00:23:20.062 "psk": "/tmp/tmp.Sp0FWSGvPT", 00:23:20.062 "method": "bdev_nvme_attach_controller", 00:23:20.062 "req_id": 1 00:23:20.062 } 00:23:20.062 Got JSON-RPC error response 00:23:20.062 response: 00:23:20.062 { 00:23:20.062 "code": -5, 00:23:20.062 "message": "Input/output error" 00:23:20.062 } 00:23:20.062 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2386619 00:23:20.062 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2386619 ']' 00:23:20.062 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2386619 00:23:20.062 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.062 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.062 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2386619 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2386619' 00:23:20.321 killing process with pid 2386619 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2386619 00:23:20.321 Received shutdown signal, test time was 
about 10.000000 seconds 00:23:20.321 00:23:20.321 Latency(us) 00:23:20.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.321 =================================================================================================================== 00:23:20.321 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.321 [2024-07-23 18:11:27.734616] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2386619 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:20.321 18:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2386758 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2386758 /var/tmp/bdevperf.sock 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2386758 ']' 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:20.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.321 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.321 [2024-07-23 18:11:27.975653] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:23:20.321 [2024-07-23 18:11:27.975740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386758 ] 00:23:20.579 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.579 [2024-07-23 18:11:28.034032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.579 [2024-07-23 18:11:28.116655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.579 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.579 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:20.579 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:20.837 [2024-07-23 18:11:28.435913] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.837 [2024-07-23 18:11:28.437945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2559160 (9): Bad file descriptor 00:23:20.837 [2024-07-23 18:11:28.438944] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.837 [2024-07-23 18:11:28.438963] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.837 [2024-07-23 18:11:28.438995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:20.837 request: 00:23:20.837 { 00:23:20.837 "name": "TLSTEST", 00:23:20.837 "trtype": "tcp", 00:23:20.837 "traddr": "10.0.0.2", 00:23:20.837 "adrfam": "ipv4", 00:23:20.837 "trsvcid": "4420", 00:23:20.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.837 "prchk_reftag": false, 00:23:20.837 "prchk_guard": false, 00:23:20.837 "hdgst": false, 00:23:20.837 "ddgst": false, 00:23:20.837 "method": "bdev_nvme_attach_controller", 00:23:20.837 "req_id": 1 00:23:20.837 } 00:23:20.837 Got JSON-RPC error response 00:23:20.837 response: 00:23:20.837 { 00:23:20.837 "code": -5, 00:23:20.837 "message": "Input/output error" 00:23:20.837 } 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2386758 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2386758 ']' 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2386758 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2386758 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:20.837 18:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2386758' 00:23:20.837 killing process with pid 2386758 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2386758 00:23:20.837 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.837 00:23:20.837 Latency(us) 00:23:20.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.837 =================================================================================================================== 00:23:20.837 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.837 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2386758 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2383261 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2383261 ']' 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2383261 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383261 00:23:21.095 
18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383261' 00:23:21.095 killing process with pid 2383261 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2383261 00:23:21.095 [2024-07-23 18:11:28.724897] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:21.095 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2383261 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.354 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.SaKNZve6D3 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.SaKNZve6D3 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2386861 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2386861 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2386861 ']' 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.354 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.613 [2024-07-23 18:11:29.056664] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:23:21.613 [2024-07-23 18:11:29.056750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.613 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.613 [2024-07-23 18:11:29.119872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.613 [2024-07-23 18:11:29.198274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.613 [2024-07-23 18:11:29.198340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.613 [2024-07-23 18:11:29.198369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.613 [2024-07-23 18:11:29.198381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.613 [2024-07-23 18:11:29.198391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.613 [2024-07-23 18:11:29.198417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.SaKNZve6D3 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SaKNZve6D3 00:23:21.871 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.129 [2024-07-23 18:11:29.560717] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.129 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.387 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.387 [2024-07-23 18:11:30.046027] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.387 [2024-07-23 18:11:30.046286] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:22.645 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.645 malloc0 00:23:22.903 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SaKNZve6D3 00:23:23.160 [2024-07-23 18:11:30.783505] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SaKNZve6D3 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SaKNZve6D3' 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2387069 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.160 18:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2387069 /var/tmp/bdevperf.sock 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2387069 ']' 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.160 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.419 [2024-07-23 18:11:30.848512] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:23:23.419 [2024-07-23 18:11:30.848582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387069 ] 00:23:23.419 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.419 [2024-07-23 18:11:30.905594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.419 [2024-07-23 18:11:30.988899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.677 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.677 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:23.677 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SaKNZve6D3 00:23:23.936 [2024-07-23 18:11:31.340775] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.936 [2024-07-23 18:11:31.340896] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:23.936 TLSTESTn1 00:23:23.936 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:23.936 Running I/O for 10 seconds... 
00:23:36.125 00:23:36.125 Latency(us) 00:23:36.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.125 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:36.125 Verification LBA range: start 0x0 length 0x2000 00:23:36.125 TLSTESTn1 : 10.02 3442.89 13.45 0.00 0.00 37113.49 6747.78 39224.51 00:23:36.125 =================================================================================================================== 00:23:36.125 Total : 3442.89 13.45 0.00 0.00 37113.49 6747.78 39224.51 00:23:36.125 0 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2387069 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2387069 ']' 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2387069 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2387069 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2387069' 00:23:36.125 killing process with pid 2387069 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2387069 00:23:36.125 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.125 
00:23:36.125 Latency(us) 00:23:36.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.125 =================================================================================================================== 00:23:36.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.125 [2024-07-23 18:11:41.624666] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2387069 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.SaKNZve6D3 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SaKNZve6D3 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SaKNZve6D3 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SaKNZve6D3 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:36.125 18:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SaKNZve6D3' 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2388384 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2388384 /var/tmp/bdevperf.sock 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2388384 ']' 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:36.125 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:36.125 [2024-07-23 18:11:41.902456] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:23:36.125 [2024-07-23 18:11:41.902543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388384 ]
00:23:36.125 EAL: No free 2048 kB hugepages reported on node 1
00:23:36.125 [2024-07-23 18:11:41.961833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:36.125 [2024-07-23 18:11:42.042353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SaKNZve6D3
00:23:36.125 [2024-07-23 18:11:42.385308] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:36.125 [2024-07-23 18:11:42.385404] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file
00:23:36.125 [2024-07-23 18:11:42.385424] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.SaKNZve6D3
00:23:36.125 request:
00:23:36.125 {
00:23:36.125 "name": "TLSTEST",
00:23:36.125 "trtype": "tcp",
00:23:36.125 "traddr": "10.0.0.2",
00:23:36.125 "adrfam": "ipv4",
00:23:36.125 "trsvcid": "4420",
00:23:36.125 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:36.125 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:36.125 "prchk_reftag": false,
00:23:36.125 "prchk_guard": false,
00:23:36.125 "hdgst": false,
00:23:36.125 "ddgst": false,
00:23:36.125 "psk": "/tmp/tmp.SaKNZve6D3",
00:23:36.125 "method": "bdev_nvme_attach_controller",
00:23:36.125 "req_id": 1
00:23:36.125 }
00:23:36.125 Got JSON-RPC error response
00:23:36.125 response:
00:23:36.125 {
00:23:36.125 "code": -1,
00:23:36.125 "message": "Operation not permitted"
00:23:36.125 }
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2388384
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2388384 ']'
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2388384
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2388384
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2388384'
killing process with pid 2388384
18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2388384
Received shutdown signal, test time was about 10.000000 seconds
00:23:36.125
00:23:36.125 Latency(us)
00:23:36.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:36.125 ===================================================================================================================
00:23:36.125 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2388384
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2386861
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2386861 ']'
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2386861
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2386861
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2386861'
killing process with pid 2386861
18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2386861
00:23:36.125 [2024-07-23 18:11:42.646993] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2386861
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:36.125 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2388528
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2388528
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2388528 ']'
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:36.126 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:36.126 [2024-07-23 18:11:42.911914] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:23:36.126 [2024-07-23 18:11:42.911994] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:36.126 EAL: No free 2048 kB hugepages reported on node 1
00:23:36.126 [2024-07-23 18:11:42.975802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:36.126 [2024-07-23 18:11:43.064078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:36.126 [2024-07-23 18:11:43.064143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:36.126 [2024-07-23 18:11:43.064156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:36.126 [2024-07-23 18:11:43.064168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:36.126 [2024-07-23 18:11:43.064177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:36.126 [2024-07-23 18:11:43.064202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.SaKNZve6D3
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.SaKNZve6D3
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.SaKNZve6D3
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SaKNZve6D3
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:36.126 [2024-07-23 18:11:43.424412] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:36.126 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:36.383 [2024-07-23 18:11:43.921716] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:36.383 [2024-07-23 18:11:43.921952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:36.383 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:36.641 malloc0
00:23:36.641 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:36.899 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SaKNZve6D3
00:23:37.157 [2024-07-23 18:11:44.646013] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file
00:23:37.157 [2024-07-23 18:11:44.646056] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file
00:23:37.157 [2024-07-23 18:11:44.646102] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport
00:23:37.157 request:
00:23:37.157 {
00:23:37.157 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:37.157 "host": "nqn.2016-06.io.spdk:host1",
00:23:37.157 "psk": "/tmp/tmp.SaKNZve6D3",
00:23:37.157 "method": "nvmf_subsystem_add_host",
00:23:37.157 "req_id": 1
00:23:37.157 }
00:23:37.157 Got JSON-RPC error response
00:23:37.157 response:
00:23:37.157 {
00:23:37.157 "code": -32603,
00:23:37.157 "message": "Internal error"
00:23:37.157 }
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2388528
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2388528 ']'
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2388528
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2388528
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2388528'
killing process with pid 2388528
18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2388528
00:23:37.157 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2388528
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.SaKNZve6D3
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2388754
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2388754
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2388754 ']'
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
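The chmod 0600 at tls.sh@181 is what lets the subsequent setup_nvmf_tgt pass: an owner-only key clears the same group/other-permission check that rejected the 0666 key earlier. A small illustrative sketch of that condition (not SPDK's actual implementation; the mktemp file is a stand-in for /tmp/tmp.SaKNZve6D3):

```shell
# Illustrative only: a 0600 key has no group/other permission bits,
# so a loader applying the mask test would accept it.
psk=$(mktemp)
chmod 0600 "$psk"
mode=$(stat -c '%a' "$psk")
if [ $((0$mode & 077)) -eq 0 ]; then
  echo "PSK permissions ok"
fi
rm -f "$psk"
```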
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:37.415 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:37.415 [2024-07-23 18:11:44.982932] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:23:37.415 [2024-07-23 18:11:44.983030] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:37.415 EAL: No free 2048 kB hugepages reported on node 1
00:23:37.415 [2024-07-23 18:11:45.045272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:37.673 [2024-07-23 18:11:45.125348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:37.673 [2024-07-23 18:11:45.125426] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:37.673 [2024-07-23 18:11:45.125439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:37.673 [2024-07-23 18:11:45.125464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:37.673 [2024-07-23 18:11:45.125473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:37.673 [2024-07-23 18:11:45.125499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.SaKNZve6D3
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SaKNZve6D3
00:23:37.673 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:37.931 [2024-07-23 18:11:45.539630] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:37.931 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:38.550 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:38.550 [2024-07-23 18:11:46.081113] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:38.550 [2024-07-23 18:11:46.081402] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:38.550 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:38.808 malloc0
00:23:38.808 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:39.065 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SaKNZve6D3
00:23:39.324 [2024-07-23 18:11:46.809021] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2388987
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2388987 /var/tmp/bdevperf.sock
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2388987 ']'
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:39.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:39.324 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:39.324 [2024-07-23 18:11:46.870213] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:23:39.324 [2024-07-23 18:11:46.870284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388987 ]
00:23:39.324 EAL: No free 2048 kB hugepages reported on node 1
00:23:39.324 [2024-07-23 18:11:46.927250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:39.582 [2024-07-23 18:11:47.010417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:39.582 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:39.582 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:39.582 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SaKNZve6D3
00:23:39.840 [2024-07-23 18:11:47.357496] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:39.840 [2024-07-23 18:11:47.357640] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:23:39.840 TLSTESTn1
00:23:39.840 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config
00:23:40.405 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{
00:23:40.405 "subsystems": [
00:23:40.405 {
00:23:40.405 "subsystem": "keyring",
00:23:40.405 "config": []
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "subsystem": "iobuf",
00:23:40.405 "config": [
00:23:40.405 {
00:23:40.405 "method": "iobuf_set_options",
00:23:40.405 "params": {
00:23:40.405 "small_pool_count": 8192,
00:23:40.405 "large_pool_count": 1024,
00:23:40.405 "small_bufsize": 8192,
00:23:40.405 "large_bufsize": 135168
00:23:40.405 }
00:23:40.405 }
00:23:40.405 ]
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "subsystem": "sock",
00:23:40.405 "config": [
00:23:40.405 {
00:23:40.405 "method": "sock_set_default_impl",
00:23:40.405 "params": {
00:23:40.405 "impl_name": "posix"
00:23:40.405 }
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "method": "sock_impl_set_options",
00:23:40.405 "params": {
00:23:40.405 "impl_name": "ssl",
00:23:40.405 "recv_buf_size": 4096,
00:23:40.405 "send_buf_size": 4096,
00:23:40.405 "enable_recv_pipe": true,
00:23:40.405 "enable_quickack": false,
00:23:40.405 "enable_placement_id": 0,
00:23:40.405 "enable_zerocopy_send_server": true,
00:23:40.405 "enable_zerocopy_send_client": false,
00:23:40.405 "zerocopy_threshold": 0,
00:23:40.405 "tls_version": 0,
00:23:40.405 "enable_ktls": false
00:23:40.405 }
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "method": "sock_impl_set_options",
00:23:40.405 "params": {
00:23:40.405 "impl_name": "posix",
00:23:40.405 "recv_buf_size": 2097152,
00:23:40.405 "send_buf_size": 2097152,
00:23:40.405 "enable_recv_pipe": true,
00:23:40.405 "enable_quickack": false,
00:23:40.405 "enable_placement_id": 0,
00:23:40.405 "enable_zerocopy_send_server": true,
00:23:40.405 "enable_zerocopy_send_client": false,
00:23:40.405 "zerocopy_threshold": 0,
00:23:40.405 "tls_version": 0,
00:23:40.405 "enable_ktls": false
00:23:40.405 }
00:23:40.405 }
00:23:40.405 ]
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "subsystem": "vmd",
00:23:40.405 "config": []
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "subsystem": "accel",
00:23:40.405 "config": [
00:23:40.405 {
00:23:40.405 "method": "accel_set_options",
00:23:40.405 "params": {
00:23:40.405 "small_cache_size": 128,
00:23:40.405 "large_cache_size": 16,
00:23:40.405 "task_count": 2048,
00:23:40.405 "sequence_count": 2048,
00:23:40.405 "buf_count": 2048
00:23:40.405 }
00:23:40.405 }
00:23:40.405 ]
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "subsystem": "bdev",
00:23:40.405 "config": [
00:23:40.405 {
00:23:40.405 "method": "bdev_set_options",
00:23:40.405 "params": {
00:23:40.405 "bdev_io_pool_size": 65535,
00:23:40.405 "bdev_io_cache_size": 256,
00:23:40.405 "bdev_auto_examine": true,
00:23:40.405 "iobuf_small_cache_size": 128,
00:23:40.405 "iobuf_large_cache_size": 16
00:23:40.405 }
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "method": "bdev_raid_set_options",
00:23:40.405 "params": {
00:23:40.405 "process_window_size_kb": 1024,
00:23:40.405 "process_max_bandwidth_mb_sec": 0
00:23:40.405 }
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "method": "bdev_iscsi_set_options",
00:23:40.405 "params": {
00:23:40.405 "timeout_sec": 30
00:23:40.405 }
00:23:40.405 },
00:23:40.405 {
00:23:40.405 "method": "bdev_nvme_set_options",
00:23:40.405 "params": {
00:23:40.405 "action_on_timeout": "none",
00:23:40.405 "timeout_us": 0,
00:23:40.406 "timeout_admin_us": 0,
00:23:40.406 "keep_alive_timeout_ms": 10000,
00:23:40.406 "arbitration_burst": 0,
00:23:40.406 "low_priority_weight": 0,
00:23:40.406 "medium_priority_weight": 0,
00:23:40.406 "high_priority_weight": 0,
00:23:40.406 "nvme_adminq_poll_period_us": 10000,
00:23:40.406 "nvme_ioq_poll_period_us": 0,
00:23:40.406 "io_queue_requests": 0,
00:23:40.406 "delay_cmd_submit": true,
00:23:40.406 "transport_retry_count": 4,
00:23:40.406 "bdev_retry_count": 3,
00:23:40.406 "transport_ack_timeout": 0,
00:23:40.406 "ctrlr_loss_timeout_sec": 0,
00:23:40.406 "reconnect_delay_sec": 0,
00:23:40.406 "fast_io_fail_timeout_sec": 0,
00:23:40.406 "disable_auto_failback": false,
00:23:40.406 "generate_uuids": false,
00:23:40.406 "transport_tos": 0,
00:23:40.406 "nvme_error_stat": false,
00:23:40.406 "rdma_srq_size": 0,
00:23:40.406 "io_path_stat": false,
00:23:40.406 "allow_accel_sequence": false,
00:23:40.406 "rdma_max_cq_size": 0,
00:23:40.406 "rdma_cm_event_timeout_ms": 0,
00:23:40.406 "dhchap_digests": [
00:23:40.406 "sha256",
00:23:40.406 "sha384",
00:23:40.406 "sha512"
00:23:40.406 ],
00:23:40.406 "dhchap_dhgroups": [
00:23:40.406 "null",
00:23:40.406 "ffdhe2048",
00:23:40.406 "ffdhe3072",
00:23:40.406 "ffdhe4096",
00:23:40.406 "ffdhe6144",
00:23:40.406 "ffdhe8192"
00:23:40.406 ]
00:23:40.406 }
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "method": "bdev_nvme_set_hotplug",
00:23:40.406 "params": {
00:23:40.406 "period_us": 100000,
00:23:40.406 "enable": false
00:23:40.406 }
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "method": "bdev_malloc_create",
00:23:40.406 "params": {
00:23:40.406 "name": "malloc0",
00:23:40.406 "num_blocks": 8192,
00:23:40.406 "block_size": 4096,
00:23:40.406 "physical_block_size": 4096,
00:23:40.406 "uuid": "dd89952c-c38a-4ab1-a287-3982f98c5829",
00:23:40.406 "optimal_io_boundary": 0,
00:23:40.406 "md_size": 0,
00:23:40.406 "dif_type": 0,
00:23:40.406 "dif_is_head_of_md": false,
00:23:40.406 "dif_pi_format": 0
00:23:40.406 }
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "method": "bdev_wait_for_examine"
00:23:40.406 }
00:23:40.406 ]
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "subsystem": "nbd",
00:23:40.406 "config": []
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "subsystem": "scheduler",
00:23:40.406 "config": [
00:23:40.406 {
00:23:40.406 "method": "framework_set_scheduler",
00:23:40.406 "params": {
00:23:40.406 "name": "static"
00:23:40.406 }
00:23:40.406 }
00:23:40.406 ]
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "subsystem": "nvmf",
00:23:40.406 "config": [
00:23:40.406 {
00:23:40.406 "method": "nvmf_set_config",
00:23:40.406 "params": {
00:23:40.406 "discovery_filter": "match_any",
00:23:40.406 "admin_cmd_passthru": {
00:23:40.406 "identify_ctrlr": false
00:23:40.406 }
00:23:40.406 }
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "method": "nvmf_set_max_subsystems",
00:23:40.406 "params": {
00:23:40.406 "max_subsystems": 1024
00:23:40.406 }
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "method": "nvmf_set_crdt",
00:23:40.406 "params": {
00:23:40.406 "crdt1": 0,
00:23:40.406 "crdt2": 0,
00:23:40.406 "crdt3": 0
00:23:40.406 }
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "method": "nvmf_create_transport",
00:23:40.406 "params": {
00:23:40.406 "trtype": "TCP",
00:23:40.406 "max_queue_depth": 128,
00:23:40.406 "max_io_qpairs_per_ctrlr": 127,
00:23:40.406 "in_capsule_data_size": 4096,
00:23:40.406 "max_io_size": 131072,
00:23:40.406 "io_unit_size": 131072,
00:23:40.406 "max_aq_depth": 128,
00:23:40.406 "num_shared_buffers": 511,
00:23:40.406 "buf_cache_size": 4294967295,
00:23:40.406 "dif_insert_or_strip": false,
00:23:40.406 "zcopy": false,
00:23:40.406 "c2h_success": false,
00:23:40.406 "sock_priority": 0,
00:23:40.406 "abort_timeout_sec": 1,
00:23:40.406 "ack_timeout": 0,
00:23:40.406 "data_wr_pool_size": 0
00:23:40.406 }
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "method": "nvmf_create_subsystem",
00:23:40.406 "params": {
00:23:40.406 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.406 "allow_any_host": false,
00:23:40.406 "serial_number": "SPDK00000000000001",
00:23:40.406 "model_number": "SPDK bdev Controller",
00:23:40.406 "max_namespaces": 10,
00:23:40.406 "min_cntlid": 1,
00:23:40.406 "max_cntlid": 65519,
00:23:40.406 "ana_reporting": false
00:23:40.406 }
00:23:40.406 },
00:23:40.406 {
00:23:40.406 "method": "nvmf_subsystem_add_host",
00:23:40.406 "params": {
00:23:40.406 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.406 "host": "nqn.2016-06.io.spdk:host1",
00:23:40.406 "psk": "/tmp/tmp.SaKNZve6D3"
00:23:40.406 } 00:23:40.406 }, 00:23:40.406 { 00:23:40.406 "method": "nvmf_subsystem_add_ns", 00:23:40.406 "params": { 00:23:40.406 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.406 "namespace": { 00:23:40.406 "nsid": 1, 00:23:40.406 "bdev_name": "malloc0", 00:23:40.406 "nguid": "DD89952CC38A4AB1A2873982F98C5829", 00:23:40.406 "uuid": "dd89952c-c38a-4ab1-a287-3982f98c5829", 00:23:40.406 "no_auto_visible": false 00:23:40.406 } 00:23:40.406 } 00:23:40.406 }, 00:23:40.406 { 00:23:40.406 "method": "nvmf_subsystem_add_listener", 00:23:40.406 "params": { 00:23:40.406 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.406 "listen_address": { 00:23:40.406 "trtype": "TCP", 00:23:40.406 "adrfam": "IPv4", 00:23:40.406 "traddr": "10.0.0.2", 00:23:40.406 "trsvcid": "4420" 00:23:40.406 }, 00:23:40.406 "secure_channel": true 00:23:40.406 } 00:23:40.406 } 00:23:40.406 ] 00:23:40.406 } 00:23:40.406 ] 00:23:40.406 }' 00:23:40.406 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:40.665 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:40.665 "subsystems": [ 00:23:40.665 { 00:23:40.665 "subsystem": "keyring", 00:23:40.665 "config": [] 00:23:40.665 }, 00:23:40.665 { 00:23:40.665 "subsystem": "iobuf", 00:23:40.665 "config": [ 00:23:40.665 { 00:23:40.665 "method": "iobuf_set_options", 00:23:40.665 "params": { 00:23:40.665 "small_pool_count": 8192, 00:23:40.665 "large_pool_count": 1024, 00:23:40.665 "small_bufsize": 8192, 00:23:40.665 "large_bufsize": 135168 00:23:40.665 } 00:23:40.665 } 00:23:40.665 ] 00:23:40.665 }, 00:23:40.665 { 00:23:40.665 "subsystem": "sock", 00:23:40.665 "config": [ 00:23:40.665 { 00:23:40.665 "method": "sock_set_default_impl", 00:23:40.665 "params": { 00:23:40.665 "impl_name": "posix" 00:23:40.665 } 00:23:40.665 }, 00:23:40.665 { 00:23:40.665 "method": "sock_impl_set_options", 00:23:40.665 
"params": { 00:23:40.665 "impl_name": "ssl", 00:23:40.665 "recv_buf_size": 4096, 00:23:40.665 "send_buf_size": 4096, 00:23:40.665 "enable_recv_pipe": true, 00:23:40.665 "enable_quickack": false, 00:23:40.665 "enable_placement_id": 0, 00:23:40.665 "enable_zerocopy_send_server": true, 00:23:40.665 "enable_zerocopy_send_client": false, 00:23:40.665 "zerocopy_threshold": 0, 00:23:40.665 "tls_version": 0, 00:23:40.665 "enable_ktls": false 00:23:40.665 } 00:23:40.665 }, 00:23:40.665 { 00:23:40.665 "method": "sock_impl_set_options", 00:23:40.665 "params": { 00:23:40.665 "impl_name": "posix", 00:23:40.665 "recv_buf_size": 2097152, 00:23:40.665 "send_buf_size": 2097152, 00:23:40.666 "enable_recv_pipe": true, 00:23:40.666 "enable_quickack": false, 00:23:40.666 "enable_placement_id": 0, 00:23:40.666 "enable_zerocopy_send_server": true, 00:23:40.666 "enable_zerocopy_send_client": false, 00:23:40.666 "zerocopy_threshold": 0, 00:23:40.666 "tls_version": 0, 00:23:40.666 "enable_ktls": false 00:23:40.666 } 00:23:40.666 } 00:23:40.666 ] 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "subsystem": "vmd", 00:23:40.666 "config": [] 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "subsystem": "accel", 00:23:40.666 "config": [ 00:23:40.666 { 00:23:40.666 "method": "accel_set_options", 00:23:40.666 "params": { 00:23:40.666 "small_cache_size": 128, 00:23:40.666 "large_cache_size": 16, 00:23:40.666 "task_count": 2048, 00:23:40.666 "sequence_count": 2048, 00:23:40.666 "buf_count": 2048 00:23:40.666 } 00:23:40.666 } 00:23:40.666 ] 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "subsystem": "bdev", 00:23:40.666 "config": [ 00:23:40.666 { 00:23:40.666 "method": "bdev_set_options", 00:23:40.666 "params": { 00:23:40.666 "bdev_io_pool_size": 65535, 00:23:40.666 "bdev_io_cache_size": 256, 00:23:40.666 "bdev_auto_examine": true, 00:23:40.666 "iobuf_small_cache_size": 128, 00:23:40.666 "iobuf_large_cache_size": 16 00:23:40.666 } 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "method": "bdev_raid_set_options", 
00:23:40.666 "params": { 00:23:40.666 "process_window_size_kb": 1024, 00:23:40.666 "process_max_bandwidth_mb_sec": 0 00:23:40.666 } 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "method": "bdev_iscsi_set_options", 00:23:40.666 "params": { 00:23:40.666 "timeout_sec": 30 00:23:40.666 } 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "method": "bdev_nvme_set_options", 00:23:40.666 "params": { 00:23:40.666 "action_on_timeout": "none", 00:23:40.666 "timeout_us": 0, 00:23:40.666 "timeout_admin_us": 0, 00:23:40.666 "keep_alive_timeout_ms": 10000, 00:23:40.666 "arbitration_burst": 0, 00:23:40.666 "low_priority_weight": 0, 00:23:40.666 "medium_priority_weight": 0, 00:23:40.666 "high_priority_weight": 0, 00:23:40.666 "nvme_adminq_poll_period_us": 10000, 00:23:40.666 "nvme_ioq_poll_period_us": 0, 00:23:40.666 "io_queue_requests": 512, 00:23:40.666 "delay_cmd_submit": true, 00:23:40.666 "transport_retry_count": 4, 00:23:40.666 "bdev_retry_count": 3, 00:23:40.666 "transport_ack_timeout": 0, 00:23:40.666 "ctrlr_loss_timeout_sec": 0, 00:23:40.666 "reconnect_delay_sec": 0, 00:23:40.666 "fast_io_fail_timeout_sec": 0, 00:23:40.666 "disable_auto_failback": false, 00:23:40.666 "generate_uuids": false, 00:23:40.666 "transport_tos": 0, 00:23:40.666 "nvme_error_stat": false, 00:23:40.666 "rdma_srq_size": 0, 00:23:40.666 "io_path_stat": false, 00:23:40.666 "allow_accel_sequence": false, 00:23:40.666 "rdma_max_cq_size": 0, 00:23:40.666 "rdma_cm_event_timeout_ms": 0, 00:23:40.666 "dhchap_digests": [ 00:23:40.666 "sha256", 00:23:40.666 "sha384", 00:23:40.666 "sha512" 00:23:40.666 ], 00:23:40.666 "dhchap_dhgroups": [ 00:23:40.666 "null", 00:23:40.666 "ffdhe2048", 00:23:40.666 "ffdhe3072", 00:23:40.666 "ffdhe4096", 00:23:40.666 "ffdhe6144", 00:23:40.666 "ffdhe8192" 00:23:40.666 ] 00:23:40.666 } 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "method": "bdev_nvme_attach_controller", 00:23:40.666 "params": { 00:23:40.666 "name": "TLSTEST", 00:23:40.666 "trtype": "TCP", 00:23:40.666 "adrfam": "IPv4", 
00:23:40.666 "traddr": "10.0.0.2", 00:23:40.666 "trsvcid": "4420", 00:23:40.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.666 "prchk_reftag": false, 00:23:40.666 "prchk_guard": false, 00:23:40.666 "ctrlr_loss_timeout_sec": 0, 00:23:40.666 "reconnect_delay_sec": 0, 00:23:40.666 "fast_io_fail_timeout_sec": 0, 00:23:40.666 "psk": "/tmp/tmp.SaKNZve6D3", 00:23:40.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.666 "hdgst": false, 00:23:40.666 "ddgst": false 00:23:40.666 } 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "method": "bdev_nvme_set_hotplug", 00:23:40.666 "params": { 00:23:40.666 "period_us": 100000, 00:23:40.666 "enable": false 00:23:40.666 } 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "method": "bdev_wait_for_examine" 00:23:40.666 } 00:23:40.666 ] 00:23:40.666 }, 00:23:40.666 { 00:23:40.666 "subsystem": "nbd", 00:23:40.666 "config": [] 00:23:40.666 } 00:23:40.666 ] 00:23:40.666 }' 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2388987 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2388987 ']' 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2388987 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2388987 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2388987' 00:23:40.666 killing process with 
pid 2388987 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2388987 00:23:40.666 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.666 00:23:40.666 Latency(us) 00:23:40.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.666 =================================================================================================================== 00:23:40.666 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.666 [2024-07-23 18:11:48.115871] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2388987 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2388754 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2388754 ']' 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2388754 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.666 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2388754 00:23:40.926 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:40.926 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:40.926 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2388754' 00:23:40.926 killing process with pid 2388754 00:23:40.926 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 2388754 00:23:40.926 [2024-07-23 18:11:48.345109] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:40.926 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2388754 00:23:40.926 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:40.926 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.926 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:40.926 "subsystems": [ 00:23:40.926 { 00:23:40.926 "subsystem": "keyring", 00:23:40.926 "config": [] 00:23:40.926 }, 00:23:40.926 { 00:23:40.926 "subsystem": "iobuf", 00:23:40.926 "config": [ 00:23:40.926 { 00:23:40.926 "method": "iobuf_set_options", 00:23:40.926 "params": { 00:23:40.926 "small_pool_count": 8192, 00:23:40.926 "large_pool_count": 1024, 00:23:40.926 "small_bufsize": 8192, 00:23:40.926 "large_bufsize": 135168 00:23:40.926 } 00:23:40.926 } 00:23:40.926 ] 00:23:40.926 }, 00:23:40.926 { 00:23:40.926 "subsystem": "sock", 00:23:40.926 "config": [ 00:23:40.926 { 00:23:40.926 "method": "sock_set_default_impl", 00:23:40.926 "params": { 00:23:40.926 "impl_name": "posix" 00:23:40.926 } 00:23:40.926 }, 00:23:40.926 { 00:23:40.926 "method": "sock_impl_set_options", 00:23:40.926 "params": { 00:23:40.926 "impl_name": "ssl", 00:23:40.926 "recv_buf_size": 4096, 00:23:40.926 "send_buf_size": 4096, 00:23:40.926 "enable_recv_pipe": true, 00:23:40.926 "enable_quickack": false, 00:23:40.926 "enable_placement_id": 0, 00:23:40.926 "enable_zerocopy_send_server": true, 00:23:40.926 "enable_zerocopy_send_client": false, 00:23:40.926 "zerocopy_threshold": 0, 00:23:40.926 "tls_version": 0, 00:23:40.926 "enable_ktls": false 00:23:40.926 } 00:23:40.926 }, 00:23:40.926 { 00:23:40.926 "method": "sock_impl_set_options", 00:23:40.926 
"params": { 00:23:40.926 "impl_name": "posix", 00:23:40.926 "recv_buf_size": 2097152, 00:23:40.926 "send_buf_size": 2097152, 00:23:40.926 "enable_recv_pipe": true, 00:23:40.926 "enable_quickack": false, 00:23:40.926 "enable_placement_id": 0, 00:23:40.926 "enable_zerocopy_send_server": true, 00:23:40.927 "enable_zerocopy_send_client": false, 00:23:40.927 "zerocopy_threshold": 0, 00:23:40.927 "tls_version": 0, 00:23:40.927 "enable_ktls": false 00:23:40.927 } 00:23:40.927 } 00:23:40.927 ] 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "subsystem": "vmd", 00:23:40.927 "config": [] 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "subsystem": "accel", 00:23:40.927 "config": [ 00:23:40.927 { 00:23:40.927 "method": "accel_set_options", 00:23:40.927 "params": { 00:23:40.927 "small_cache_size": 128, 00:23:40.927 "large_cache_size": 16, 00:23:40.927 "task_count": 2048, 00:23:40.927 "sequence_count": 2048, 00:23:40.927 "buf_count": 2048 00:23:40.927 } 00:23:40.927 } 00:23:40.927 ] 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "subsystem": "bdev", 00:23:40.927 "config": [ 00:23:40.927 { 00:23:40.927 "method": "bdev_set_options", 00:23:40.927 "params": { 00:23:40.927 "bdev_io_pool_size": 65535, 00:23:40.927 "bdev_io_cache_size": 256, 00:23:40.927 "bdev_auto_examine": true, 00:23:40.927 "iobuf_small_cache_size": 128, 00:23:40.927 "iobuf_large_cache_size": 16 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "bdev_raid_set_options", 00:23:40.927 "params": { 00:23:40.927 "process_window_size_kb": 1024, 00:23:40.927 "process_max_bandwidth_mb_sec": 0 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "bdev_iscsi_set_options", 00:23:40.927 "params": { 00:23:40.927 "timeout_sec": 30 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "bdev_nvme_set_options", 00:23:40.927 "params": { 00:23:40.927 "action_on_timeout": "none", 00:23:40.927 "timeout_us": 0, 00:23:40.927 "timeout_admin_us": 0, 00:23:40.927 "keep_alive_timeout_ms": 10000, 
00:23:40.927 "arbitration_burst": 0, 00:23:40.927 "low_priority_weight": 0, 00:23:40.927 "medium_priority_weight": 0, 00:23:40.927 "high_priority_weight": 0, 00:23:40.927 "nvme_adminq_poll_period_us": 10000, 00:23:40.927 "nvme_ioq_poll_period_us": 0, 00:23:40.927 "io_queue_requests": 0, 00:23:40.927 "delay_cmd_submit": true, 00:23:40.927 "transport_retry_count": 4, 00:23:40.927 "bdev_retry_count": 3, 00:23:40.927 "transport_ack_timeout": 0, 00:23:40.927 "ctrlr_loss_timeout_sec": 0, 00:23:40.927 "reconnect_delay_sec": 0, 00:23:40.927 "fast_io_fail_timeout_sec": 0, 00:23:40.927 "disable_auto_failback": false, 00:23:40.927 "generate_uuids": false, 00:23:40.927 "transport_tos": 0, 00:23:40.927 "nvme_error_stat": false, 00:23:40.927 "rdma_srq_size": 0, 00:23:40.927 "io_path_stat": false, 00:23:40.927 "allow_accel_sequence": false, 00:23:40.927 "rdma_max_cq_size": 0, 00:23:40.927 "rdma_cm_event_timeout_ms": 0, 00:23:40.927 "dhchap_digests": [ 00:23:40.927 "sha256", 00:23:40.927 "sha384", 00:23:40.927 "sha512" 00:23:40.927 ], 00:23:40.927 "dhchap_dhgroups": [ 00:23:40.927 "null", 00:23:40.927 "ffdhe2048", 00:23:40.927 "ffdhe3072", 00:23:40.927 "ffdhe4096", 00:23:40.927 "ffdhe6144", 00:23:40.927 "ffdhe8192" 00:23:40.927 ] 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "bdev_nvme_set_hotplug", 00:23:40.927 "params": { 00:23:40.927 "period_us": 100000, 00:23:40.927 "enable": false 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "bdev_malloc_create", 00:23:40.927 "params": { 00:23:40.927 "name": "malloc0", 00:23:40.927 "num_blocks": 8192, 00:23:40.927 "block_size": 4096, 00:23:40.927 "physical_block_size": 4096, 00:23:40.927 "uuid": "dd89952c-c38a-4ab1-a287-3982f98c5829", 00:23:40.927 "optimal_io_boundary": 0, 00:23:40.927 "md_size": 0, 00:23:40.927 "dif_type": 0, 00:23:40.927 "dif_is_head_of_md": false, 00:23:40.927 "dif_pi_format": 0 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "bdev_wait_for_examine" 
00:23:40.927 } 00:23:40.927 ] 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "subsystem": "nbd", 00:23:40.927 "config": [] 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "subsystem": "scheduler", 00:23:40.927 "config": [ 00:23:40.927 { 00:23:40.927 "method": "framework_set_scheduler", 00:23:40.927 "params": { 00:23:40.927 "name": "static" 00:23:40.927 } 00:23:40.927 } 00:23:40.927 ] 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "subsystem": "nvmf", 00:23:40.927 "config": [ 00:23:40.927 { 00:23:40.927 "method": "nvmf_set_config", 00:23:40.927 "params": { 00:23:40.927 "discovery_filter": "match_any", 00:23:40.927 "admin_cmd_passthru": { 00:23:40.927 "identify_ctrlr": false 00:23:40.927 } 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "nvmf_set_max_subsystems", 00:23:40.927 "params": { 00:23:40.927 "max_subsystems": 1024 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "nvmf_set_crdt", 00:23:40.927 "params": { 00:23:40.927 "crdt1": 0, 00:23:40.927 "crdt2": 0, 00:23:40.927 "crdt3": 0 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "nvmf_create_transport", 00:23:40.927 "params": { 00:23:40.927 "trtype": "TCP", 00:23:40.927 "max_queue_depth": 128, 00:23:40.927 "max_io_qpairs_per_ctrlr": 127, 00:23:40.927 "in_capsule_data_size": 4096, 00:23:40.927 "max_io_size": 131072, 00:23:40.927 "io_unit_size": 131072, 00:23:40.927 "max_aq_depth": 128, 00:23:40.927 "num_shared_buffers": 511, 00:23:40.927 "buf_cache_size": 4294967295, 00:23:40.927 "dif_insert_or_strip": false, 00:23:40.927 "zcopy": false, 00:23:40.927 "c2h_success": false, 00:23:40.927 "sock_priority": 0, 00:23:40.927 "abort_timeout_sec": 1, 00:23:40.927 "ack_timeout": 0, 00:23:40.927 "data_wr_pool_size": 0 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "nvmf_create_subsystem", 00:23:40.927 "params": { 00:23:40.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.927 "allow_any_host": false, 00:23:40.927 "serial_number": "SPDK00000000000001", 
00:23:40.927 "model_number": "SPDK bdev Controller", 00:23:40.927 "max_namespaces": 10, 00:23:40.927 "min_cntlid": 1, 00:23:40.927 "max_cntlid": 65519, 00:23:40.927 "ana_reporting": false 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "nvmf_subsystem_add_host", 00:23:40.927 "params": { 00:23:40.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.927 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.927 "psk": "/tmp/tmp.SaKNZve6D3" 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "nvmf_subsystem_add_ns", 00:23:40.927 "params": { 00:23:40.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.927 "namespace": { 00:23:40.927 "nsid": 1, 00:23:40.927 "bdev_name": "malloc0", 00:23:40.927 "nguid": "DD89952CC38A4AB1A2873982F98C5829", 00:23:40.927 "uuid": "dd89952c-c38a-4ab1-a287-3982f98c5829", 00:23:40.927 "no_auto_visible": false 00:23:40.927 } 00:23:40.927 } 00:23:40.927 }, 00:23:40.927 { 00:23:40.927 "method": "nvmf_subsystem_add_listener", 00:23:40.927 "params": { 00:23:40.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.927 "listen_address": { 00:23:40.927 "trtype": "TCP", 00:23:40.927 "adrfam": "IPv4", 00:23:40.927 "traddr": "10.0.0.2", 00:23:40.927 "trsvcid": "4420" 00:23:40.927 }, 00:23:40.927 "secure_channel": true 00:23:40.927 } 00:23:40.927 } 00:23:40.927 ] 00:23:40.927 } 00:23:40.927 ] 00:23:40.927 }' 00:23:40.927 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2389259 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2389259 
00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2389259 ']' 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.928 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.186 [2024-07-23 18:11:48.624550] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:23:41.186 [2024-07-23 18:11:48.624635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.186 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.186 [2024-07-23 18:11:48.683074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.186 [2024-07-23 18:11:48.764948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.186 [2024-07-23 18:11:48.765007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.186 [2024-07-23 18:11:48.765034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.186 [2024-07-23 18:11:48.765044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:41.186 [2024-07-23 18:11:48.765054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.186 [2024-07-23 18:11:48.765125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.444 [2024-07-23 18:11:48.991090] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.444 [2024-07-23 18:11:49.016899] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:41.444 [2024-07-23 18:11:49.032962] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.444 [2024-07-23 18:11:49.033213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2389411 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2389411 /var/tmp/bdevperf.sock 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2389411 ']' 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.009 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:42.009 "subsystems": [ 00:23:42.009 { 00:23:42.009 "subsystem": "keyring", 00:23:42.009 "config": [] 00:23:42.009 }, 00:23:42.009 { 00:23:42.009 "subsystem": "iobuf", 00:23:42.009 "config": [ 00:23:42.009 { 00:23:42.009 "method": "iobuf_set_options", 00:23:42.009 "params": { 00:23:42.009 "small_pool_count": 8192, 00:23:42.009 "large_pool_count": 1024, 00:23:42.010 "small_bufsize": 8192, 00:23:42.010 "large_bufsize": 135168 00:23:42.010 } 00:23:42.010 } 00:23:42.010 ] 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "subsystem": "sock", 00:23:42.010 "config": [ 00:23:42.010 { 00:23:42.010 "method": "sock_set_default_impl", 00:23:42.010 "params": { 00:23:42.010 "impl_name": "posix" 00:23:42.010 } 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "method": "sock_impl_set_options", 00:23:42.010 "params": { 00:23:42.010 "impl_name": "ssl", 00:23:42.010 "recv_buf_size": 4096, 00:23:42.010 "send_buf_size": 4096, 00:23:42.010 "enable_recv_pipe": true, 00:23:42.010 "enable_quickack": false, 00:23:42.010 "enable_placement_id": 0, 00:23:42.010 "enable_zerocopy_send_server": true, 00:23:42.010 "enable_zerocopy_send_client": false, 00:23:42.010 "zerocopy_threshold": 0, 00:23:42.010 "tls_version": 0, 00:23:42.010 "enable_ktls": false 00:23:42.010 } 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "method": "sock_impl_set_options", 00:23:42.010 "params": { 00:23:42.010 "impl_name": "posix", 00:23:42.010 "recv_buf_size": 2097152, 00:23:42.010 "send_buf_size": 2097152, 00:23:42.010 "enable_recv_pipe": true, 00:23:42.010 "enable_quickack": false, 00:23:42.010 "enable_placement_id": 0, 
00:23:42.010 "enable_zerocopy_send_server": true, 00:23:42.010 "enable_zerocopy_send_client": false, 00:23:42.010 "zerocopy_threshold": 0, 00:23:42.010 "tls_version": 0, 00:23:42.010 "enable_ktls": false 00:23:42.010 } 00:23:42.010 } 00:23:42.010 ] 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "subsystem": "vmd", 00:23:42.010 "config": [] 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "subsystem": "accel", 00:23:42.010 "config": [ 00:23:42.010 { 00:23:42.010 "method": "accel_set_options", 00:23:42.010 "params": { 00:23:42.010 "small_cache_size": 128, 00:23:42.010 "large_cache_size": 16, 00:23:42.010 "task_count": 2048, 00:23:42.010 "sequence_count": 2048, 00:23:42.010 "buf_count": 2048 00:23:42.010 } 00:23:42.010 } 00:23:42.010 ] 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "subsystem": "bdev", 00:23:42.010 "config": [ 00:23:42.010 { 00:23:42.010 "method": "bdev_set_options", 00:23:42.010 "params": { 00:23:42.010 "bdev_io_pool_size": 65535, 00:23:42.010 "bdev_io_cache_size": 256, 00:23:42.010 "bdev_auto_examine": true, 00:23:42.010 "iobuf_small_cache_size": 128, 00:23:42.010 "iobuf_large_cache_size": 16 00:23:42.010 } 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "method": "bdev_raid_set_options", 00:23:42.010 "params": { 00:23:42.010 "process_window_size_kb": 1024, 00:23:42.010 "process_max_bandwidth_mb_sec": 0 00:23:42.010 } 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "method": "bdev_iscsi_set_options", 00:23:42.010 "params": { 00:23:42.010 "timeout_sec": 30 00:23:42.010 } 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "method": "bdev_nvme_set_options", 00:23:42.010 "params": { 00:23:42.010 "action_on_timeout": "none", 00:23:42.010 "timeout_us": 0, 00:23:42.010 "timeout_admin_us": 0, 00:23:42.010 "keep_alive_timeout_ms": 10000, 00:23:42.010 "arbitration_burst": 0, 00:23:42.010 "low_priority_weight": 0, 00:23:42.010 "medium_priority_weight": 0, 00:23:42.010 "high_priority_weight": 0, 00:23:42.010 "nvme_adminq_poll_period_us": 10000, 00:23:42.010 "nvme_ioq_poll_period_us": 0, 
00:23:42.010 "io_queue_requests": 512, 00:23:42.010 "delay_cmd_submit": true, 00:23:42.010 "transport_retry_count": 4, 00:23:42.010 "bdev_retry_count": 3, 00:23:42.010 "transport_ack_timeout": 0, 00:23:42.010 "ctrlr_loss_timeout_sec": 0, 00:23:42.010 "reconnect_delay_sec": 0, 00:23:42.010 "fast_io_fail_timeout_sec": 0, 00:23:42.010 "disable_auto_failback": false, 00:23:42.010 "generate_uuids": false, 00:23:42.010 "transport_tos": 0, 00:23:42.010 "nvme_error_stat": false, 00:23:42.010 "rdma_srq_size": 0, 00:23:42.010 "io_path_stat": false, 00:23:42.010 "allow_accel_sequence": false, 00:23:42.010 "rdma_max_cq_size": 0, 00:23:42.010 "rdma_cm_event_timeout_ms": 0, 00:23:42.010 "dhchap_digests": [ 00:23:42.010 "sha256", 00:23:42.010 "sha384", 00:23:42.010 "sha512" 00:23:42.010 ], 00:23:42.010 "dhchap_dhgroups": [ 00:23:42.010 "null", 00:23:42.010 "ffdhe2048", 00:23:42.010 "ffdhe3072", 00:23:42.010 "ffdhe4096", 00:23:42.010 "ffdhe6144", 00:23:42.010 "ffdhe8192" 00:23:42.010 ] 00:23:42.010 } 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "method": "bdev_nvme_attach_controller", 00:23:42.010 "params": { 00:23:42.010 "name": "TLSTEST", 00:23:42.010 "trtype": "TCP", 00:23:42.010 "adrfam": "IPv4", 00:23:42.010 "traddr": "10.0.0.2", 00:23:42.010 "trsvcid": "4420", 00:23:42.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.010 "prchk_reftag": false, 00:23:42.010 "prchk_guard": false, 00:23:42.010 "ctrlr_loss_timeout_sec": 0, 00:23:42.010 "reconnect_delay_sec": 0, 00:23:42.010 "fast_io_fail_timeout_sec": 0, 00:23:42.010 "psk": "/tmp/tmp.SaKNZve6D3", 00:23:42.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.010 "hdgst": false, 00:23:42.010 "ddgst": false 00:23:42.010 } 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "method": "bdev_nvme_set_hotplug", 00:23:42.010 "params": { 00:23:42.010 "period_us": 100000, 00:23:42.010 "enable": false 00:23:42.010 } 00:23:42.010 }, 00:23:42.010 { 00:23:42.010 "method": "bdev_wait_for_examine" 00:23:42.010 } 00:23:42.010 ] 00:23:42.010 }, 
00:23:42.010 { 00:23:42.010 "subsystem": "nbd", 00:23:42.010 "config": [] 00:23:42.010 } 00:23:42.010 ] 00:23:42.010 }' 00:23:42.010 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.010 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.010 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.010 [2024-07-23 18:11:49.653610] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:23:42.010 [2024-07-23 18:11:49.653685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389411 ] 00:23:42.269 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.269 [2024-07-23 18:11:49.712396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.269 [2024-07-23 18:11:49.798509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.526 [2024-07-23 18:11:49.969498] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.527 [2024-07-23 18:11:49.969666] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:43.091 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.091 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:43.091 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:43.091 Running I/O for 10 seconds... 00:23:55.281 00:23:55.281 Latency(us) 00:23:55.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.281 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:55.281 Verification LBA range: start 0x0 length 0x2000 00:23:55.281 TLSTESTn1 : 10.03 3226.32 12.60 0.00 0.00 39587.99 6359.42 59807.67 00:23:55.281 =================================================================================================================== 00:23:55.281 Total : 3226.32 12.60 0.00 0.00 39587.99 6359.42 59807.67 00:23:55.281 0 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2389411 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2389411 ']' 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2389411 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2389411 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2389411' 00:23:55.282 killing process with pid 2389411 00:23:55.282 18:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2389411 00:23:55.282 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.282 00:23:55.282 Latency(us) 00:23:55.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.282 =================================================================================================================== 00:23:55.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.282 [2024-07-23 18:12:00.847594] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:55.282 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2389411 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2389259 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2389259 ']' 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2389259 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2389259 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2389259' 00:23:55.282 killing process with pid 2389259 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2389259 00:23:55.282 
[2024-07-23 18:12:01.089384] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2389259 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2390836 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2390836 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2390836 ']' 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.282 [2024-07-23 18:12:01.375865] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:23:55.282 [2024-07-23 18:12:01.375944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.282 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.282 [2024-07-23 18:12:01.439419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.282 [2024-07-23 18:12:01.525897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.282 [2024-07-23 18:12:01.525954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.282 [2024-07-23 18:12:01.525981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.282 [2024-07-23 18:12:01.525992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.282 [2024-07-23 18:12:01.526002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.282 [2024-07-23 18:12:01.526035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.SaKNZve6D3 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SaKNZve6D3 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.282 [2024-07-23 18:12:01.942747] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.282 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.282 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.282 [2024-07-23 18:12:02.444075] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.282 [2024-07-23 18:12:02.444357] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:55.282 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.282 malloc0 00:23:55.282 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.539 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SaKNZve6D3 00:23:55.797 [2024-07-23 18:12:03.341920] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2391151 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2391151 /var/tmp/bdevperf.sock 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2391151 ']' 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:55.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.797 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.797 [2024-07-23 18:12:03.403915] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:23:55.797 [2024-07-23 18:12:03.403984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391151 ] 00:23:55.797 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.057 [2024-07-23 18:12:03.463524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.057 [2024-07-23 18:12:03.556571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.057 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.057 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:56.057 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SaKNZve6D3 00:23:56.316 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:56.573 [2024-07-23 18:12:04.213731] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.830 nvme0n1 00:23:56.830 18:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.830 Running I/O for 1 seconds... 00:23:58.202 00:23:58.202 Latency(us) 00:23:58.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.202 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.202 Verification LBA range: start 0x0 length 0x2000 00:23:58.202 nvme0n1 : 1.03 3590.76 14.03 0.00 0.00 35122.87 5776.88 31845.64 00:23:58.202 =================================================================================================================== 00:23:58.202 Total : 3590.76 14.03 0.00 0.00 35122.87 5776.88 31845.64 00:23:58.202 0 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2391151 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2391151 ']' 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2391151 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2391151 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2391151' 00:23:58.202 killing process with pid 2391151 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
2391151 00:23:58.202 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.202 00:23:58.202 Latency(us) 00:23:58.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.202 =================================================================================================================== 00:23:58.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2391151 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2390836 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2390836 ']' 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2390836 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2390836 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2390836' 00:23:58.202 killing process with pid 2390836 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2390836 00:23:58.202 [2024-07-23 18:12:05.753020] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:58.202 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2390836 
00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2391439 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2391439 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2391439 ']' 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.461 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.461 [2024-07-23 18:12:06.023881] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:23:58.461 [2024-07-23 18:12:06.023955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.461 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.461 [2024-07-23 18:12:06.087207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.719 [2024-07-23 18:12:06.173567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.719 [2024-07-23 18:12:06.173637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.719 [2024-07-23 18:12:06.173666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.719 [2024-07-23 18:12:06.173678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.719 [2024-07-23 18:12:06.173688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:58.719 [2024-07-23 18:12:06.173721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.719 [2024-07-23 18:12:06.302028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.719 malloc0 00:23:58.719 [2024-07-23 18:12:06.332590] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.719 [2024-07-23 18:12:06.340516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2391459 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 2391459 /var/tmp/bdevperf.sock 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2391459 ']' 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.719 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.977 [2024-07-23 18:12:06.406616] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:23:58.977 [2024-07-23 18:12:06.406701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391459 ] 00:23:58.977 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.977 [2024-07-23 18:12:06.465970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.977 [2024-07-23 18:12:06.555792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.235 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.235 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:59.235 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SaKNZve6D3 00:23:59.493 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:59.493 [2024-07-23 18:12:07.135360] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.750 nvme0n1 00:23:59.750 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:59.750 Running I/O for 1 seconds... 
00:24:01.123 00:24:01.123 Latency(us) 00:24:01.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.123 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:01.123 Verification LBA range: start 0x0 length 0x2000 00:24:01.123 nvme0n1 : 1.02 3511.33 13.72 0.00 0.00 36067.88 6213.78 38641.97 00:24:01.123 =================================================================================================================== 00:24:01.123 Total : 3511.33 13.72 0.00 0.00 36067.88 6213.78 38641.97 00:24:01.123 0 00:24:01.123 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:24:01.123 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.123 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.123 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.123 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:24:01.123 "subsystems": [ 00:24:01.123 { 00:24:01.123 "subsystem": "keyring", 00:24:01.123 "config": [ 00:24:01.123 { 00:24:01.123 "method": "keyring_file_add_key", 00:24:01.123 "params": { 00:24:01.123 "name": "key0", 00:24:01.123 "path": "/tmp/tmp.SaKNZve6D3" 00:24:01.123 } 00:24:01.123 } 00:24:01.123 ] 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "subsystem": "iobuf", 00:24:01.123 "config": [ 00:24:01.123 { 00:24:01.123 "method": "iobuf_set_options", 00:24:01.123 "params": { 00:24:01.123 "small_pool_count": 8192, 00:24:01.123 "large_pool_count": 1024, 00:24:01.123 "small_bufsize": 8192, 00:24:01.123 "large_bufsize": 135168 00:24:01.123 } 00:24:01.123 } 00:24:01.123 ] 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "subsystem": "sock", 00:24:01.123 "config": [ 00:24:01.123 { 00:24:01.123 "method": "sock_set_default_impl", 00:24:01.123 "params": { 00:24:01.123 "impl_name": "posix" 00:24:01.123 } 
00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "method": "sock_impl_set_options", 00:24:01.123 "params": { 00:24:01.123 "impl_name": "ssl", 00:24:01.123 "recv_buf_size": 4096, 00:24:01.123 "send_buf_size": 4096, 00:24:01.123 "enable_recv_pipe": true, 00:24:01.123 "enable_quickack": false, 00:24:01.123 "enable_placement_id": 0, 00:24:01.123 "enable_zerocopy_send_server": true, 00:24:01.123 "enable_zerocopy_send_client": false, 00:24:01.123 "zerocopy_threshold": 0, 00:24:01.123 "tls_version": 0, 00:24:01.123 "enable_ktls": false 00:24:01.123 } 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "method": "sock_impl_set_options", 00:24:01.123 "params": { 00:24:01.123 "impl_name": "posix", 00:24:01.123 "recv_buf_size": 2097152, 00:24:01.123 "send_buf_size": 2097152, 00:24:01.123 "enable_recv_pipe": true, 00:24:01.123 "enable_quickack": false, 00:24:01.123 "enable_placement_id": 0, 00:24:01.123 "enable_zerocopy_send_server": true, 00:24:01.123 "enable_zerocopy_send_client": false, 00:24:01.123 "zerocopy_threshold": 0, 00:24:01.123 "tls_version": 0, 00:24:01.123 "enable_ktls": false 00:24:01.123 } 00:24:01.123 } 00:24:01.123 ] 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "subsystem": "vmd", 00:24:01.123 "config": [] 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "subsystem": "accel", 00:24:01.123 "config": [ 00:24:01.123 { 00:24:01.123 "method": "accel_set_options", 00:24:01.123 "params": { 00:24:01.123 "small_cache_size": 128, 00:24:01.123 "large_cache_size": 16, 00:24:01.123 "task_count": 2048, 00:24:01.123 "sequence_count": 2048, 00:24:01.123 "buf_count": 2048 00:24:01.123 } 00:24:01.123 } 00:24:01.123 ] 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "subsystem": "bdev", 00:24:01.123 "config": [ 00:24:01.123 { 00:24:01.123 "method": "bdev_set_options", 00:24:01.123 "params": { 00:24:01.123 "bdev_io_pool_size": 65535, 00:24:01.123 "bdev_io_cache_size": 256, 00:24:01.123 "bdev_auto_examine": true, 00:24:01.123 "iobuf_small_cache_size": 128, 00:24:01.123 "iobuf_large_cache_size": 16 
00:24:01.123 } 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "method": "bdev_raid_set_options", 00:24:01.123 "params": { 00:24:01.123 "process_window_size_kb": 1024, 00:24:01.123 "process_max_bandwidth_mb_sec": 0 00:24:01.123 } 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "method": "bdev_iscsi_set_options", 00:24:01.123 "params": { 00:24:01.123 "timeout_sec": 30 00:24:01.123 } 00:24:01.123 }, 00:24:01.123 { 00:24:01.123 "method": "bdev_nvme_set_options", 00:24:01.123 "params": { 00:24:01.123 "action_on_timeout": "none", 00:24:01.123 "timeout_us": 0, 00:24:01.123 "timeout_admin_us": 0, 00:24:01.123 "keep_alive_timeout_ms": 10000, 00:24:01.123 "arbitration_burst": 0, 00:24:01.123 "low_priority_weight": 0, 00:24:01.123 "medium_priority_weight": 0, 00:24:01.123 "high_priority_weight": 0, 00:24:01.123 "nvme_adminq_poll_period_us": 10000, 00:24:01.123 "nvme_ioq_poll_period_us": 0, 00:24:01.123 "io_queue_requests": 0, 00:24:01.123 "delay_cmd_submit": true, 00:24:01.123 "transport_retry_count": 4, 00:24:01.123 "bdev_retry_count": 3, 00:24:01.123 "transport_ack_timeout": 0, 00:24:01.123 "ctrlr_loss_timeout_sec": 0, 00:24:01.123 "reconnect_delay_sec": 0, 00:24:01.123 "fast_io_fail_timeout_sec": 0, 00:24:01.123 "disable_auto_failback": false, 00:24:01.123 "generate_uuids": false, 00:24:01.123 "transport_tos": 0, 00:24:01.123 "nvme_error_stat": false, 00:24:01.123 "rdma_srq_size": 0, 00:24:01.123 "io_path_stat": false, 00:24:01.123 "allow_accel_sequence": false, 00:24:01.123 "rdma_max_cq_size": 0, 00:24:01.123 "rdma_cm_event_timeout_ms": 0, 00:24:01.123 "dhchap_digests": [ 00:24:01.123 "sha256", 00:24:01.123 "sha384", 00:24:01.123 "sha512" 00:24:01.123 ], 00:24:01.124 "dhchap_dhgroups": [ 00:24:01.124 "null", 00:24:01.124 "ffdhe2048", 00:24:01.124 "ffdhe3072", 00:24:01.124 "ffdhe4096", 00:24:01.124 "ffdhe6144", 00:24:01.124 "ffdhe8192" 00:24:01.124 ] 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "bdev_nvme_set_hotplug", 00:24:01.124 "params": { 00:24:01.124 
"period_us": 100000, 00:24:01.124 "enable": false 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "bdev_malloc_create", 00:24:01.124 "params": { 00:24:01.124 "name": "malloc0", 00:24:01.124 "num_blocks": 8192, 00:24:01.124 "block_size": 4096, 00:24:01.124 "physical_block_size": 4096, 00:24:01.124 "uuid": "02d47fc6-dc2e-4d76-bfae-d39918e51e8f", 00:24:01.124 "optimal_io_boundary": 0, 00:24:01.124 "md_size": 0, 00:24:01.124 "dif_type": 0, 00:24:01.124 "dif_is_head_of_md": false, 00:24:01.124 "dif_pi_format": 0 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "bdev_wait_for_examine" 00:24:01.124 } 00:24:01.124 ] 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "subsystem": "nbd", 00:24:01.124 "config": [] 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "subsystem": "scheduler", 00:24:01.124 "config": [ 00:24:01.124 { 00:24:01.124 "method": "framework_set_scheduler", 00:24:01.124 "params": { 00:24:01.124 "name": "static" 00:24:01.124 } 00:24:01.124 } 00:24:01.124 ] 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "subsystem": "nvmf", 00:24:01.124 "config": [ 00:24:01.124 { 00:24:01.124 "method": "nvmf_set_config", 00:24:01.124 "params": { 00:24:01.124 "discovery_filter": "match_any", 00:24:01.124 "admin_cmd_passthru": { 00:24:01.124 "identify_ctrlr": false 00:24:01.124 } 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "nvmf_set_max_subsystems", 00:24:01.124 "params": { 00:24:01.124 "max_subsystems": 1024 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "nvmf_set_crdt", 00:24:01.124 "params": { 00:24:01.124 "crdt1": 0, 00:24:01.124 "crdt2": 0, 00:24:01.124 "crdt3": 0 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "nvmf_create_transport", 00:24:01.124 "params": { 00:24:01.124 "trtype": "TCP", 00:24:01.124 "max_queue_depth": 128, 00:24:01.124 "max_io_qpairs_per_ctrlr": 127, 00:24:01.124 "in_capsule_data_size": 4096, 00:24:01.124 "max_io_size": 131072, 00:24:01.124 "io_unit_size": 
131072, 00:24:01.124 "max_aq_depth": 128, 00:24:01.124 "num_shared_buffers": 511, 00:24:01.124 "buf_cache_size": 4294967295, 00:24:01.124 "dif_insert_or_strip": false, 00:24:01.124 "zcopy": false, 00:24:01.124 "c2h_success": false, 00:24:01.124 "sock_priority": 0, 00:24:01.124 "abort_timeout_sec": 1, 00:24:01.124 "ack_timeout": 0, 00:24:01.124 "data_wr_pool_size": 0 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "nvmf_create_subsystem", 00:24:01.124 "params": { 00:24:01.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.124 "allow_any_host": false, 00:24:01.124 "serial_number": "00000000000000000000", 00:24:01.124 "model_number": "SPDK bdev Controller", 00:24:01.124 "max_namespaces": 32, 00:24:01.124 "min_cntlid": 1, 00:24:01.124 "max_cntlid": 65519, 00:24:01.124 "ana_reporting": false 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "nvmf_subsystem_add_host", 00:24:01.124 "params": { 00:24:01.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.124 "host": "nqn.2016-06.io.spdk:host1", 00:24:01.124 "psk": "key0" 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "nvmf_subsystem_add_ns", 00:24:01.124 "params": { 00:24:01.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.124 "namespace": { 00:24:01.124 "nsid": 1, 00:24:01.124 "bdev_name": "malloc0", 00:24:01.124 "nguid": "02D47FC6DC2E4D76BFAED39918E51E8F", 00:24:01.124 "uuid": "02d47fc6-dc2e-4d76-bfae-d39918e51e8f", 00:24:01.124 "no_auto_visible": false 00:24:01.124 } 00:24:01.124 } 00:24:01.124 }, 00:24:01.124 { 00:24:01.124 "method": "nvmf_subsystem_add_listener", 00:24:01.124 "params": { 00:24:01.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.124 "listen_address": { 00:24:01.124 "trtype": "TCP", 00:24:01.124 "adrfam": "IPv4", 00:24:01.124 "traddr": "10.0.0.2", 00:24:01.124 "trsvcid": "4420" 00:24:01.124 }, 00:24:01.124 "secure_channel": false, 00:24:01.124 "sock_impl": "ssl" 00:24:01.124 } 00:24:01.124 } 00:24:01.124 ] 00:24:01.124 } 00:24:01.124 ] 
00:24:01.124 }' 00:24:01.124 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:01.382 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:24:01.382 "subsystems": [ 00:24:01.382 { 00:24:01.382 "subsystem": "keyring", 00:24:01.382 "config": [ 00:24:01.382 { 00:24:01.382 "method": "keyring_file_add_key", 00:24:01.382 "params": { 00:24:01.382 "name": "key0", 00:24:01.382 "path": "/tmp/tmp.SaKNZve6D3" 00:24:01.382 } 00:24:01.382 } 00:24:01.382 ] 00:24:01.382 }, 00:24:01.382 { 00:24:01.382 "subsystem": "iobuf", 00:24:01.382 "config": [ 00:24:01.382 { 00:24:01.382 "method": "iobuf_set_options", 00:24:01.382 "params": { 00:24:01.382 "small_pool_count": 8192, 00:24:01.382 "large_pool_count": 1024, 00:24:01.382 "small_bufsize": 8192, 00:24:01.382 "large_bufsize": 135168 00:24:01.382 } 00:24:01.382 } 00:24:01.382 ] 00:24:01.382 }, 00:24:01.382 { 00:24:01.382 "subsystem": "sock", 00:24:01.382 "config": [ 00:24:01.382 { 00:24:01.382 "method": "sock_set_default_impl", 00:24:01.382 "params": { 00:24:01.382 "impl_name": "posix" 00:24:01.382 } 00:24:01.382 }, 00:24:01.383 { 00:24:01.383 "method": "sock_impl_set_options", 00:24:01.383 "params": { 00:24:01.383 "impl_name": "ssl", 00:24:01.383 "recv_buf_size": 4096, 00:24:01.383 "send_buf_size": 4096, 00:24:01.383 "enable_recv_pipe": true, 00:24:01.383 "enable_quickack": false, 00:24:01.383 "enable_placement_id": 0, 00:24:01.383 "enable_zerocopy_send_server": true, 00:24:01.383 "enable_zerocopy_send_client": false, 00:24:01.383 "zerocopy_threshold": 0, 00:24:01.383 "tls_version": 0, 00:24:01.383 "enable_ktls": false 00:24:01.383 } 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "method": "sock_impl_set_options", 00:24:01.383 "params": { 00:24:01.383 "impl_name": "posix", 00:24:01.383 "recv_buf_size": 2097152, 00:24:01.383 "send_buf_size": 2097152, 00:24:01.383 
"enable_recv_pipe": true, 00:24:01.383 "enable_quickack": false, 00:24:01.383 "enable_placement_id": 0, 00:24:01.383 "enable_zerocopy_send_server": true, 00:24:01.383 "enable_zerocopy_send_client": false, 00:24:01.383 "zerocopy_threshold": 0, 00:24:01.383 "tls_version": 0, 00:24:01.383 "enable_ktls": false 00:24:01.383 } 00:24:01.383 } 00:24:01.383 ] 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "subsystem": "vmd", 00:24:01.383 "config": [] 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "subsystem": "accel", 00:24:01.383 "config": [ 00:24:01.383 { 00:24:01.383 "method": "accel_set_options", 00:24:01.383 "params": { 00:24:01.383 "small_cache_size": 128, 00:24:01.383 "large_cache_size": 16, 00:24:01.383 "task_count": 2048, 00:24:01.383 "sequence_count": 2048, 00:24:01.383 "buf_count": 2048 00:24:01.383 } 00:24:01.383 } 00:24:01.383 ] 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "subsystem": "bdev", 00:24:01.383 "config": [ 00:24:01.383 { 00:24:01.383 "method": "bdev_set_options", 00:24:01.383 "params": { 00:24:01.383 "bdev_io_pool_size": 65535, 00:24:01.383 "bdev_io_cache_size": 256, 00:24:01.383 "bdev_auto_examine": true, 00:24:01.383 "iobuf_small_cache_size": 128, 00:24:01.383 "iobuf_large_cache_size": 16 00:24:01.383 } 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "method": "bdev_raid_set_options", 00:24:01.383 "params": { 00:24:01.383 "process_window_size_kb": 1024, 00:24:01.383 "process_max_bandwidth_mb_sec": 0 00:24:01.383 } 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "method": "bdev_iscsi_set_options", 00:24:01.383 "params": { 00:24:01.383 "timeout_sec": 30 00:24:01.383 } 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "method": "bdev_nvme_set_options", 00:24:01.383 "params": { 00:24:01.383 "action_on_timeout": "none", 00:24:01.383 "timeout_us": 0, 00:24:01.383 "timeout_admin_us": 0, 00:24:01.383 "keep_alive_timeout_ms": 10000, 00:24:01.383 "arbitration_burst": 0, 00:24:01.383 "low_priority_weight": 0, 00:24:01.383 "medium_priority_weight": 0, 00:24:01.383 
"high_priority_weight": 0, 00:24:01.383 "nvme_adminq_poll_period_us": 10000, 00:24:01.383 "nvme_ioq_poll_period_us": 0, 00:24:01.383 "io_queue_requests": 512, 00:24:01.383 "delay_cmd_submit": true, 00:24:01.383 "transport_retry_count": 4, 00:24:01.383 "bdev_retry_count": 3, 00:24:01.383 "transport_ack_timeout": 0, 00:24:01.383 "ctrlr_loss_timeout_sec": 0, 00:24:01.383 "reconnect_delay_sec": 0, 00:24:01.383 "fast_io_fail_timeout_sec": 0, 00:24:01.383 "disable_auto_failback": false, 00:24:01.383 "generate_uuids": false, 00:24:01.383 "transport_tos": 0, 00:24:01.383 "nvme_error_stat": false, 00:24:01.383 "rdma_srq_size": 0, 00:24:01.383 "io_path_stat": false, 00:24:01.383 "allow_accel_sequence": false, 00:24:01.383 "rdma_max_cq_size": 0, 00:24:01.383 "rdma_cm_event_timeout_ms": 0, 00:24:01.383 "dhchap_digests": [ 00:24:01.383 "sha256", 00:24:01.383 "sha384", 00:24:01.383 "sha512" 00:24:01.383 ], 00:24:01.383 "dhchap_dhgroups": [ 00:24:01.383 "null", 00:24:01.383 "ffdhe2048", 00:24:01.383 "ffdhe3072", 00:24:01.383 "ffdhe4096", 00:24:01.383 "ffdhe6144", 00:24:01.383 "ffdhe8192" 00:24:01.383 ] 00:24:01.383 } 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "method": "bdev_nvme_attach_controller", 00:24:01.383 "params": { 00:24:01.383 "name": "nvme0", 00:24:01.383 "trtype": "TCP", 00:24:01.383 "adrfam": "IPv4", 00:24:01.383 "traddr": "10.0.0.2", 00:24:01.383 "trsvcid": "4420", 00:24:01.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.383 "prchk_reftag": false, 00:24:01.383 "prchk_guard": false, 00:24:01.383 "ctrlr_loss_timeout_sec": 0, 00:24:01.383 "reconnect_delay_sec": 0, 00:24:01.383 "fast_io_fail_timeout_sec": 0, 00:24:01.383 "psk": "key0", 00:24:01.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.383 "hdgst": false, 00:24:01.383 "ddgst": false 00:24:01.383 } 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "method": "bdev_nvme_set_hotplug", 00:24:01.383 "params": { 00:24:01.383 "period_us": 100000, 00:24:01.383 "enable": false 00:24:01.383 } 00:24:01.383 }, 
00:24:01.383 { 00:24:01.383 "method": "bdev_enable_histogram", 00:24:01.383 "params": { 00:24:01.383 "name": "nvme0n1", 00:24:01.383 "enable": true 00:24:01.383 } 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "method": "bdev_wait_for_examine" 00:24:01.383 } 00:24:01.383 ] 00:24:01.383 }, 00:24:01.383 { 00:24:01.383 "subsystem": "nbd", 00:24:01.383 "config": [] 00:24:01.383 } 00:24:01.383 ] 00:24:01.383 }' 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2391459 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2391459 ']' 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2391459 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2391459 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2391459' 00:24:01.383 killing process with pid 2391459 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2391459 00:24:01.383 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.383 00:24:01.383 Latency(us) 00:24:01.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.383 =================================================================================================================== 00:24:01.383 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:24:01.383 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2391459 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2391439 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2391439 ']' 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2391439 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2391439 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2391439' 00:24:01.642 killing process with pid 2391439 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2391439 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2391439 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.642 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:24:01.642 "subsystems": [ 00:24:01.642 { 00:24:01.642 "subsystem": "keyring", 00:24:01.642 "config": [ 00:24:01.642 { 00:24:01.642 "method": "keyring_file_add_key", 00:24:01.642 "params": { 00:24:01.642 "name": "key0", 00:24:01.642 "path": 
"/tmp/tmp.SaKNZve6D3" 00:24:01.642 } 00:24:01.642 } 00:24:01.642 ] 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "subsystem": "iobuf", 00:24:01.642 "config": [ 00:24:01.642 { 00:24:01.642 "method": "iobuf_set_options", 00:24:01.642 "params": { 00:24:01.642 "small_pool_count": 8192, 00:24:01.642 "large_pool_count": 1024, 00:24:01.642 "small_bufsize": 8192, 00:24:01.642 "large_bufsize": 135168 00:24:01.642 } 00:24:01.642 } 00:24:01.642 ] 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "subsystem": "sock", 00:24:01.642 "config": [ 00:24:01.642 { 00:24:01.642 "method": "sock_set_default_impl", 00:24:01.642 "params": { 00:24:01.642 "impl_name": "posix" 00:24:01.642 } 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "method": "sock_impl_set_options", 00:24:01.642 "params": { 00:24:01.642 "impl_name": "ssl", 00:24:01.642 "recv_buf_size": 4096, 00:24:01.642 "send_buf_size": 4096, 00:24:01.642 "enable_recv_pipe": true, 00:24:01.642 "enable_quickack": false, 00:24:01.642 "enable_placement_id": 0, 00:24:01.642 "enable_zerocopy_send_server": true, 00:24:01.642 "enable_zerocopy_send_client": false, 00:24:01.642 "zerocopy_threshold": 0, 00:24:01.642 "tls_version": 0, 00:24:01.642 "enable_ktls": false 00:24:01.642 } 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "method": "sock_impl_set_options", 00:24:01.642 "params": { 00:24:01.642 "impl_name": "posix", 00:24:01.642 "recv_buf_size": 2097152, 00:24:01.642 "send_buf_size": 2097152, 00:24:01.642 "enable_recv_pipe": true, 00:24:01.642 "enable_quickack": false, 00:24:01.642 "enable_placement_id": 0, 00:24:01.642 "enable_zerocopy_send_server": true, 00:24:01.642 "enable_zerocopy_send_client": false, 00:24:01.642 "zerocopy_threshold": 0, 00:24:01.642 "tls_version": 0, 00:24:01.642 "enable_ktls": false 00:24:01.642 } 00:24:01.642 } 00:24:01.642 ] 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "subsystem": "vmd", 00:24:01.642 "config": [] 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "subsystem": "accel", 00:24:01.642 "config": [ 00:24:01.642 { 
00:24:01.642 "method": "accel_set_options", 00:24:01.642 "params": { 00:24:01.642 "small_cache_size": 128, 00:24:01.642 "large_cache_size": 16, 00:24:01.642 "task_count": 2048, 00:24:01.642 "sequence_count": 2048, 00:24:01.642 "buf_count": 2048 00:24:01.642 } 00:24:01.642 } 00:24:01.642 ] 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "subsystem": "bdev", 00:24:01.642 "config": [ 00:24:01.642 { 00:24:01.642 "method": "bdev_set_options", 00:24:01.642 "params": { 00:24:01.642 "bdev_io_pool_size": 65535, 00:24:01.642 "bdev_io_cache_size": 256, 00:24:01.642 "bdev_auto_examine": true, 00:24:01.642 "iobuf_small_cache_size": 128, 00:24:01.642 "iobuf_large_cache_size": 16 00:24:01.642 } 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "method": "bdev_raid_set_options", 00:24:01.642 "params": { 00:24:01.642 "process_window_size_kb": 1024, 00:24:01.642 "process_max_bandwidth_mb_sec": 0 00:24:01.642 } 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "method": "bdev_iscsi_set_options", 00:24:01.642 "params": { 00:24:01.642 "timeout_sec": 30 00:24:01.642 } 00:24:01.642 }, 00:24:01.642 { 00:24:01.642 "method": "bdev_nvme_set_options", 00:24:01.642 "params": { 00:24:01.642 "action_on_timeout": "none", 00:24:01.642 "timeout_us": 0, 00:24:01.642 "timeout_admin_us": 0, 00:24:01.642 "keep_alive_timeout_ms": 10000, 00:24:01.642 "arbitration_burst": 0, 00:24:01.643 "low_priority_weight": 0, 00:24:01.643 "medium_priority_weight": 0, 00:24:01.643 "high_priority_weight": 0, 00:24:01.643 "nvme_adminq_poll_period_us": 10000, 00:24:01.643 "nvme_ioq_poll_period_us": 0, 00:24:01.643 "io_queue_requests": 0, 00:24:01.643 "delay_cmd_submit": true, 00:24:01.643 "transport_retry_count": 4, 00:24:01.643 "bdev_retry_count": 3, 00:24:01.643 "transport_ack_timeout": 0, 00:24:01.643 "ctrlr_loss_timeout_sec": 0, 00:24:01.643 "reconnect_delay_sec": 0, 00:24:01.643 "fast_io_fail_timeout_sec": 0, 00:24:01.643 "disable_auto_failback": false, 00:24:01.643 "generate_uuids": false, 00:24:01.643 "transport_tos": 0, 
00:24:01.643 "nvme_error_stat": false, 00:24:01.643 "rdma_srq_size": 0, 00:24:01.643 "io_path_stat": false, 00:24:01.643 "allow_accel_sequence": false, 00:24:01.643 "rdma_max_cq_size": 0, 00:24:01.643 "rdma_cm_event_timeout_ms": 0, 00:24:01.643 "dhchap_digests": [ 00:24:01.643 "sha256", 00:24:01.643 "sha384", 00:24:01.643 "sha512" 00:24:01.643 ], 00:24:01.643 "dhchap_dhgroups": [ 00:24:01.643 "null", 00:24:01.643 "ffdhe2048", 00:24:01.643 "ffdhe3072", 00:24:01.643 "ffdhe4096", 00:24:01.643 "ffdhe6144", 00:24:01.643 "ffdhe8192" 00:24:01.643 ] 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "bdev_nvme_set_hotplug", 00:24:01.643 "params": { 00:24:01.643 "period_us": 100000, 00:24:01.643 "enable": false 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "bdev_malloc_create", 00:24:01.643 "params": { 00:24:01.643 "name": "malloc0", 00:24:01.643 "num_blocks": 8192, 00:24:01.643 "block_size": 4096, 00:24:01.643 "physical_block_size": 4096, 00:24:01.643 "uuid": "02d47fc6-dc2e-4d76-bfae-d39918e51e8f", 00:24:01.643 "optimal_io_boundary": 0, 00:24:01.643 "md_size": 0, 00:24:01.643 "dif_type": 0, 00:24:01.643 "dif_is_head_of_md": false, 00:24:01.643 "dif_pi_format": 0 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "bdev_wait_for_examine" 00:24:01.643 } 00:24:01.643 ] 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "subsystem": "nbd", 00:24:01.643 "config": [] 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "subsystem": "scheduler", 00:24:01.643 "config": [ 00:24:01.643 { 00:24:01.643 "method": "framework_set_scheduler", 00:24:01.643 "params": { 00:24:01.643 "name": "static" 00:24:01.643 } 00:24:01.643 } 00:24:01.643 ] 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "subsystem": "nvmf", 00:24:01.643 "config": [ 00:24:01.643 { 00:24:01.643 "method": "nvmf_set_config", 00:24:01.643 "params": { 00:24:01.643 "discovery_filter": "match_any", 00:24:01.643 "admin_cmd_passthru": { 00:24:01.643 "identify_ctrlr": false 00:24:01.643 } 
00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "nvmf_set_max_subsystems", 00:24:01.643 "params": { 00:24:01.643 "max_subsystems": 1024 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "nvmf_set_crdt", 00:24:01.643 "params": { 00:24:01.643 "crdt1": 0, 00:24:01.643 "crdt2": 0, 00:24:01.643 "crdt3": 0 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "nvmf_create_transport", 00:24:01.643 "params": { 00:24:01.643 "trtype": "TCP", 00:24:01.643 "max_queue_depth": 128, 00:24:01.643 "max_io_qpairs_per_ctrlr": 127, 00:24:01.643 "in_capsule_data_size": 4096, 00:24:01.643 "max_io_size": 131072, 00:24:01.643 "io_unit_size": 131072, 00:24:01.643 "max_aq_depth": 128, 00:24:01.643 "num_shared_buffers": 511, 00:24:01.643 "buf_cache_size": 4294967295, 00:24:01.643 "dif_insert_or_strip": false, 00:24:01.643 "zcopy": false, 00:24:01.643 "c2h_success": false, 00:24:01.643 "sock_priority": 0, 00:24:01.643 "abort_timeout_sec": 1, 00:24:01.643 "ack_timeout": 0, 00:24:01.643 "data_wr_pool_size": 0 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "nvmf_create_subsystem", 00:24:01.643 "params": { 00:24:01.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.643 "allow_any_host": false, 00:24:01.643 "serial_number": "00000000000000000000", 00:24:01.643 "model_number": "SPDK bdev Controller", 00:24:01.643 "max_namespaces": 32, 00:24:01.643 "min_cntlid": 1, 00:24:01.643 "max_cntlid": 65519, 00:24:01.643 "ana_reporting": false 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "nvmf_subsystem_add_host", 00:24:01.643 "params": { 00:24:01.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.643 "host": "nqn.2016-06.io.spdk:host1", 00:24:01.643 "psk": "key0" 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "nvmf_subsystem_add_ns", 00:24:01.643 "params": { 00:24:01.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.643 "namespace": { 00:24:01.643 "nsid": 1, 00:24:01.643 "bdev_name": 
"malloc0", 00:24:01.643 "nguid": "02D47FC6DC2E4D76BFAED39918E51E8F", 00:24:01.643 "uuid": "02d47fc6-dc2e-4d76-bfae-d39918e51e8f", 00:24:01.643 "no_auto_visible": false 00:24:01.643 } 00:24:01.643 } 00:24:01.643 }, 00:24:01.643 { 00:24:01.643 "method": "nvmf_subsystem_add_listener", 00:24:01.643 "params": { 00:24:01.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.643 "listen_address": { 00:24:01.643 "trtype": "TCP", 00:24:01.643 "adrfam": "IPv4", 00:24:01.643 "traddr": "10.0.0.2", 00:24:01.643 "trsvcid": "4420" 00:24:01.643 }, 00:24:01.643 "secure_channel": false, 00:24:01.643 "sock_impl": "ssl" 00:24:01.643 } 00:24:01.643 } 00:24:01.643 ] 00:24:01.643 } 00:24:01.643 ] 00:24:01.643 }' 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2392360 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2392360 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2392360 ']' 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
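The config echoed above ties TLS together in three places: a `keyring_file_add_key` entry naming `key0`, an `nvmf_subsystem_add_host` entry referencing that key via `psk`, and an `nvmf_subsystem_add_listener` entry with `"sock_impl": "ssl"` and `"secure_channel": false`. A minimal sketch (not part of the SPDK test suite; the function name and the trimmed-down config dict are illustrative) of checking that a saved config of this shape is wired consistently:

```python
import json

def check_tls_config(cfg: dict) -> bool:
    """Return True if every PSK host references a keyring key and
    every listener uses the 'ssl' sock impl with secure_channel off."""
    subsystems = {s["subsystem"]: s["config"] for s in cfg["subsystems"]}
    # Collect key names registered through the keyring subsystem.
    keys = {m["params"]["name"]
            for m in subsystems.get("keyring", [])
            if m["method"] == "keyring_file_add_key"}
    for m in subsystems.get("nvmf", []):
        if m["method"] == "nvmf_subsystem_add_host":
            if m["params"].get("psk") not in keys:
                return False  # host points at a key that was never added
        if m["method"] == "nvmf_subsystem_add_listener":
            p = m["params"]
            if p.get("sock_impl") != "ssl" or p.get("secure_channel"):
                return False  # this test path expects ssl + secure_channel=false
    return True

# Trimmed-down version of the config saved in the log above.
cfg = {
    "subsystems": [
        {"subsystem": "keyring", "config": [
            {"method": "keyring_file_add_key",
             "params": {"name": "key0", "path": "/tmp/tmp.SaKNZve6D3"}}]},
        {"subsystem": "nvmf", "config": [
            {"method": "nvmf_subsystem_add_host",
             "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                        "host": "nqn.2016-06.io.spdk:host1", "psk": "key0"}},
            {"method": "nvmf_subsystem_add_listener",
             "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                        "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                           "traddr": "10.0.0.2",
                                           "trsvcid": "4420"},
                        "secure_channel": False, "sock_impl": "ssl"}}]},
    ]
}
print(check_tls_config(cfg))
```

This mirrors why the bdevperf side attaches with `"psk": "key0"` against the same `/tmp/tmp.SaKNZve6D3` key file: both ends must resolve the same keyring entry for the TLS handshake to succeed.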
00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.643 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.902 [2024-07-23 18:12:09.340263] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:24:01.902 [2024-07-23 18:12:09.340399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.902 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.902 [2024-07-23 18:12:09.405057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.902 [2024-07-23 18:12:09.491720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.902 [2024-07-23 18:12:09.491779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.902 [2024-07-23 18:12:09.491807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.902 [2024-07-23 18:12:09.491818] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.902 [2024-07-23 18:12:09.491828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
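The JSON configs in this log are interleaved with per-line `HH:MM:SS.mmm` timestamps, so they cannot be fed to `json.loads()` as-is. An illustrative helper (an assumption for post-processing such logs, not an SPDK tool) that strips the timestamp tokens to recover parseable JSON:

```python
import json
import re

# Matches the per-line timestamps (e.g. "00:24:01.124") that the test
# harness prefixes onto each wrapped fragment of the echoed config.
TS = re.compile(r"\b\d{2}:\d{2}:\d{2}\.\d{3}\b")

def strip_timestamps(raw: str) -> str:
    """Remove interleaved HH:MM:SS.mmm tokens from a captured log span."""
    return TS.sub("", raw)

# A short fragment in the same shape as the dumps above.
raw = ('{ 00:24:01.124 "method": "bdev_malloc_create", 00:24:01.124 '
       '"params": { 00:24:01.124 "name": "malloc0", 00:24:01.124 '
       '"num_blocks": 8192 00:24:01.124 } 00:24:01.124 }')
cfg = json.loads(strip_timestamps(raw))
print(cfg["params"]["name"])
```

Whitespace left behind by the substitution is harmless, since JSON ignores it between tokens.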
00:24:01.902 [2024-07-23 18:12:09.491913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.160 [2024-07-23 18:12:09.714806] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.160 [2024-07-23 18:12:09.757882] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.160 [2024-07-23 18:12:09.758119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2392518 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2392518 /var/tmp/bdevperf.sock 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2392518 ']' 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.726 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:24:02.726 "subsystems": [ 00:24:02.726 { 00:24:02.726 "subsystem": "keyring", 00:24:02.726 "config": [ 00:24:02.726 { 00:24:02.726 "method": "keyring_file_add_key", 00:24:02.726 "params": { 00:24:02.726 "name": "key0", 00:24:02.726 "path": "/tmp/tmp.SaKNZve6D3" 00:24:02.726 } 00:24:02.726 } 00:24:02.726 ] 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "subsystem": "iobuf", 00:24:02.726 "config": [ 00:24:02.726 { 00:24:02.726 "method": "iobuf_set_options", 00:24:02.726 "params": { 00:24:02.726 "small_pool_count": 8192, 00:24:02.726 "large_pool_count": 1024, 00:24:02.726 "small_bufsize": 8192, 00:24:02.726 "large_bufsize": 135168 00:24:02.726 } 00:24:02.726 } 00:24:02.726 ] 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "subsystem": "sock", 00:24:02.726 "config": [ 00:24:02.726 { 00:24:02.726 "method": "sock_set_default_impl", 00:24:02.726 "params": { 00:24:02.726 "impl_name": "posix" 00:24:02.726 } 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "method": "sock_impl_set_options", 00:24:02.726 "params": { 00:24:02.726 "impl_name": "ssl", 00:24:02.726 "recv_buf_size": 4096, 00:24:02.726 "send_buf_size": 4096, 00:24:02.726 "enable_recv_pipe": true, 00:24:02.726 "enable_quickack": false, 00:24:02.726 "enable_placement_id": 0, 00:24:02.726 "enable_zerocopy_send_server": true, 00:24:02.726 "enable_zerocopy_send_client": false, 00:24:02.726 "zerocopy_threshold": 0, 00:24:02.726 "tls_version": 0, 00:24:02.726 "enable_ktls": false 00:24:02.726 } 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "method": "sock_impl_set_options", 00:24:02.726 "params": { 00:24:02.726 "impl_name": "posix", 
00:24:02.726 "recv_buf_size": 2097152, 00:24:02.726 "send_buf_size": 2097152, 00:24:02.726 "enable_recv_pipe": true, 00:24:02.726 "enable_quickack": false, 00:24:02.726 "enable_placement_id": 0, 00:24:02.726 "enable_zerocopy_send_server": true, 00:24:02.726 "enable_zerocopy_send_client": false, 00:24:02.726 "zerocopy_threshold": 0, 00:24:02.726 "tls_version": 0, 00:24:02.726 "enable_ktls": false 00:24:02.726 } 00:24:02.726 } 00:24:02.726 ] 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "subsystem": "vmd", 00:24:02.726 "config": [] 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "subsystem": "accel", 00:24:02.726 "config": [ 00:24:02.726 { 00:24:02.726 "method": "accel_set_options", 00:24:02.726 "params": { 00:24:02.726 "small_cache_size": 128, 00:24:02.726 "large_cache_size": 16, 00:24:02.726 "task_count": 2048, 00:24:02.726 "sequence_count": 2048, 00:24:02.726 "buf_count": 2048 00:24:02.726 } 00:24:02.726 } 00:24:02.726 ] 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "subsystem": "bdev", 00:24:02.726 "config": [ 00:24:02.726 { 00:24:02.726 "method": "bdev_set_options", 00:24:02.726 "params": { 00:24:02.726 "bdev_io_pool_size": 65535, 00:24:02.726 "bdev_io_cache_size": 256, 00:24:02.726 "bdev_auto_examine": true, 00:24:02.726 "iobuf_small_cache_size": 128, 00:24:02.726 "iobuf_large_cache_size": 16 00:24:02.726 } 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "method": "bdev_raid_set_options", 00:24:02.726 "params": { 00:24:02.726 "process_window_size_kb": 1024, 00:24:02.726 "process_max_bandwidth_mb_sec": 0 00:24:02.726 } 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "method": "bdev_iscsi_set_options", 00:24:02.726 "params": { 00:24:02.726 "timeout_sec": 30 00:24:02.726 } 00:24:02.726 }, 00:24:02.726 { 00:24:02.726 "method": "bdev_nvme_set_options", 00:24:02.726 "params": { 00:24:02.726 "action_on_timeout": "none", 00:24:02.727 "timeout_us": 0, 00:24:02.727 "timeout_admin_us": 0, 00:24:02.727 "keep_alive_timeout_ms": 10000, 00:24:02.727 "arbitration_burst": 0, 00:24:02.727 
"low_priority_weight": 0, 00:24:02.727 "medium_priority_weight": 0, 00:24:02.727 "high_priority_weight": 0, 00:24:02.727 "nvme_adminq_poll_period_us": 10000, 00:24:02.727 "nvme_ioq_poll_period_us": 0, 00:24:02.727 "io_queue_requests": 512, 00:24:02.727 "delay_cmd_submit": true, 00:24:02.727 "transport_retry_count": 4, 00:24:02.727 "bdev_retry_count": 3, 00:24:02.727 "transport_ack_timeout": 0, 00:24:02.727 "ctrlr_loss_timeout_sec": 0, 00:24:02.727 "reconnect_delay_sec": 0, 00:24:02.727 "fast_io_fail_timeout_sec": 0, 00:24:02.727 "disable_auto_failback": false, 00:24:02.727 "generate_uuids": false, 00:24:02.727 "transport_tos": 0, 00:24:02.727 "nvme_error_stat": false, 00:24:02.727 "rdma_srq_size": 0, 00:24:02.727 "io_path_stat": false, 00:24:02.727 "allow_accel_sequence": false, 00:24:02.727 "rdma_max_cq_size": 0, 00:24:02.727 "rdma_cm_event_timeout_ms": 0, 00:24:02.727 "dhchap_digests": [ 00:24:02.727 "sha256", 00:24:02.727 "sha384", 00:24:02.727 "sha512" 00:24:02.727 ], 00:24:02.727 "dhchap_dhgroups": [ 00:24:02.727 "null", 00:24:02.727 "ffdhe2048", 00:24:02.727 "ffdhe3072", 00:24:02.727 "ffdhe4096", 00:24:02.727 "ffdhe6144", 00:24:02.727 "ffdhe8192" 00:24:02.727 ] 00:24:02.727 } 00:24:02.727 }, 00:24:02.727 { 00:24:02.727 "method": "bdev_nvme_attach_controller", 00:24:02.727 "params": { 00:24:02.727 "name": "nvme0", 00:24:02.727 "trtype": "TCP", 00:24:02.727 "adrfam": "IPv4", 00:24:02.727 "traddr": "10.0.0.2", 00:24:02.727 "trsvcid": "4420", 00:24:02.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.727 "prchk_reftag": false, 00:24:02.727 "prchk_guard": false, 00:24:02.727 "ctrlr_loss_timeout_sec": 0, 00:24:02.727 "reconnect_delay_sec": 0, 00:24:02.727 "fast_io_fail_timeout_sec": 0, 00:24:02.727 "psk": "key0", 00:24:02.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.727 "hdgst": false, 00:24:02.727 "ddgst": false 00:24:02.727 } 00:24:02.727 }, 00:24:02.727 { 00:24:02.727 "method": "bdev_nvme_set_hotplug", 00:24:02.727 "params": { 00:24:02.727 
"period_us": 100000, 00:24:02.727 "enable": false 00:24:02.727 } 00:24:02.727 }, 00:24:02.727 { 00:24:02.727 "method": "bdev_enable_histogram", 00:24:02.727 "params": { 00:24:02.727 "name": "nvme0n1", 00:24:02.727 "enable": true 00:24:02.727 } 00:24:02.727 }, 00:24:02.727 { 00:24:02.727 "method": "bdev_wait_for_examine" 00:24:02.727 } 00:24:02.727 ] 00:24:02.727 }, 00:24:02.727 { 00:24:02.727 "subsystem": "nbd", 00:24:02.727 "config": [] 00:24:02.727 } 00:24:02.727 ] 00:24:02.727 }' 00:24:02.727 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.727 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.727 [2024-07-23 18:12:10.349455] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:24:02.727 [2024-07-23 18:12:10.349528] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392518 ] 00:24:02.727 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.985 [2024-07-23 18:12:10.409685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.985 [2024-07-23 18:12:10.493895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.243 [2024-07-23 18:12:10.669732] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.832 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.832 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:03.832 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.832 18:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:24:04.090 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.090 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:04.090 Running I/O for 1 seconds... 00:24:05.466 00:24:05.466 Latency(us) 00:24:05.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.466 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:05.466 Verification LBA range: start 0x0 length 0x2000 00:24:05.466 nvme0n1 : 1.02 3501.07 13.68 0.00 0.00 36172.68 7864.32 32816.55 00:24:05.466 =================================================================================================================== 00:24:05.466 Total : 3501.07 13.68 0.00 0.00 36172.68 7864.32 32816.55 00:24:05.466 0 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z 
nvmf_trace.0 ]] 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:05.466 nvmf_trace.0 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2392518 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2392518 ']' 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2392518 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2392518 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2392518' 00:24:05.466 killing process with pid 2392518 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2392518 00:24:05.466 Received shutdown signal, test time was about 1.000000 seconds 00:24:05.466 00:24:05.466 Latency(us) 00:24:05.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.466 
=================================================================================================================== 00:24:05.466 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.466 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2392518 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.466 rmmod nvme_tcp 00:24:05.466 rmmod nvme_fabrics 00:24:05.466 rmmod nvme_keyring 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2392360 ']' 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2392360 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2392360 ']' 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2392360 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2392360 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2392360' 00:24:05.466 killing process with pid 2392360 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2392360 00:24:05.466 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2392360 00:24:05.724 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:05.724 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.724 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.724 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.724 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.724 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.724 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.724 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Sp0FWSGvPT /tmp/tmp.sviq3gc79E /tmp/tmp.SaKNZve6D3 00:24:08.262 00:24:08.262 real 1m18.690s 
00:24:08.262 user 2m5.014s 00:24:08.262 sys 0m26.118s 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.262 ************************************ 00:24:08.262 END TEST nvmf_tls 00:24:08.262 ************************************ 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.262 ************************************ 00:24:08.262 START TEST nvmf_fips 00:24:08.262 ************************************ 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.262 * Looking for test storage... 
00:24:08.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.262 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:08.263 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # type -P openssl 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:08.264 Error setting digest 00:24:08.264 00A22D67E77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:08.264 00A22D67E77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.264 18:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.264 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.166 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.166 18:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:10.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:10.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.167 18:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:10.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.167 
18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:10.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.167 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:10.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:24:10.425 00:24:10.425 --- 10.0.0.2 ping statistics --- 00:24:10.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.425 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:24:10.425 00:24:10.425 --- 10.0.0.1 ping statistics --- 00:24:10.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.425 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.425 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2394763 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2394763 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2394763 ']' 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.426 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.426 [2024-07-23 18:12:17.978788] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:24:10.426 [2024-07-23 18:12:17.978888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.426 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.426 [2024-07-23 18:12:18.043750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.684 [2024-07-23 18:12:18.125494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.684 [2024-07-23 18:12:18.125553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.684 [2024-07-23 18:12:18.125580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.684 [2024-07-23 18:12:18.125592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.684 [2024-07-23 18:12:18.125602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.684 [2024-07-23 18:12:18.125626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:10.684 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:10.942 [2024-07-23 18:12:18.538122] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.942 [2024-07-23 18:12:18.554105] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.942 [2024-07-23 18:12:18.554409] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.942 [2024-07-23 18:12:18.585259] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:10.942 malloc0 00:24:11.200 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2394914 00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2394914 /var/tmp/bdevperf.sock 00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2394914 ']' 00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
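The notices above (TCP transport init, experimental TLS listener on 10.0.0.2:4420, the deprecated PSK-path warning, and the `malloc0` bdev) come from a truncated `rpc.py` invocation, so the exact arguments are not visible in the log. The following is a hypothetical reconstruction of a target-side setup that would produce those notices; the subsystem/host NQNs and `key.txt` path are assumptions carried over from the rest of this run, and the `--secure-channel` / `--psk` flags are the standard SPDK RPC options of this era, not copied from the log.

```shell
# Hypothetical sketch, NOT the literal commands from the truncated log line.
rpc=./scripts/rpc.py   # the run above uses the Jenkins workspace copy of rpc.py

$rpc nvmf_create_transport -t tcp -o                     # "*** TCP Transport Init ***"
$rpc bdev_malloc_create -b malloc0 32 4096               # backing bdev "malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
# Secure-channel listener triggers the "TLS support is considered experimental" notice:
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 --secure-channel
# Passing a PSK file path triggers the nvmf_tcp_psk_path deprecation warning:
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key.txt
```

This is a configuration sketch only; it requires a running `nvmf_tgt` (here, one inside the `cvl_0_0_ns_spdk` namespace) to execute.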
00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.201 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.201 [2024-07-23 18:12:18.676502] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:24:11.201 [2024-07-23 18:12:18.676591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394914 ] 00:24:11.201 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.201 [2024-07-23 18:12:18.734796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.201 [2024-07-23 18:12:18.820782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.459 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.459 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:11.459 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:11.716 [2024-07-23 18:12:19.149830] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.716 [2024-07-23 18:12:19.149968] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:11.716 TLSTESTn1 00:24:11.716 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:11.716 Running I/O for 10 seconds... 00:24:23.908 00:24:23.908 Latency(us) 00:24:23.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.908 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:23.908 Verification LBA range: start 0x0 length 0x2000 00:24:23.908 TLSTESTn1 : 10.02 3393.52 13.26 0.00 0.00 37652.16 9126.49 35535.08 00:24:23.908 =================================================================================================================== 00:24:23.908 Total : 3393.52 13.26 0.00 0.00 37652.16 9126.49 35535.08 00:24:23.908 0 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:23.908 nvmf_trace.0 
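The bdevperf summary above reports IOPS and MiB/s side by side; since the run uses a fixed 4096-byte I/O size (`-o 4096`), the two columns are related by a constant factor (MiB/s = IOPS × 4096 / 2^20). A quick sanity check with the values copied from the table, using awk only for the floating-point arithmetic:

```shell
# Consistency check on the TLSTESTn1 row above: 3393.52 IOPS at 4 KiB per I/O.
iops=3393.52
io_size=4096   # bytes, matches bdevperf's -o 4096
mib_s=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "computed: $mib_s MiB/s"   # the table reports 13.26 MiB/s
```

The computed value matches the table's MiB/s column, confirming the two figures describe the same throughput.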
00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2394914 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2394914 ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2394914 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2394914 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2394914' 00:24:23.908 killing process with pid 2394914 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2394914 00:24:23.908 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.908 00:24:23.908 Latency(us) 00:24:23.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.908 =================================================================================================================== 00:24:23.908 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.908 [2024-07-23 18:12:29.502195] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 
2394914 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.908 rmmod nvme_tcp 00:24:23.908 rmmod nvme_fabrics 00:24:23.908 rmmod nvme_keyring 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2394763 ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2394763 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2394763 ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2394763 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2394763 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2394763' 00:24:23.908 killing process with pid 2394763 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2394763 00:24:23.908 [2024-07-23 18:12:29.817811] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:23.908 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2394763 00:24:23.908 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.908 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.908 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.908 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.908 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.908 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.908 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.908 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:24.478 00:24:24.478 real 0m16.653s 00:24:24.478 user 0m21.145s 00:24:24.478 sys 
0m5.538s 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:24.478 ************************************ 00:24:24.478 END TEST nvmf_fips 00:24:24.478 ************************************ 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.478 18:12:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:24.736 ************************************ 00:24:24.736 START TEST nvmf_fuzz 00:24:24.736 ************************************ 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:24.736 * Looking for test storage... 
00:24:24.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:24.736 
18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:24.736 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:26.635 18:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.635 
18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:26.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:26.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.635 18:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:26.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.635 
18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:26.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.635 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.636 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.636 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:26.636 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.636 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:26.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:26.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:24:26.893 00:24:26.893 --- 10.0.0.2 ping statistics --- 00:24:26.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.893 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:26.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:24:26.893 00:24:26.893 --- 10.0.0.1 ping statistics --- 00:24:26.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.893 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:26.893 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2398153 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2398153 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2398153 ']' 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
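The trace above builds the test topology: one port of the NIC (cvl_0_0) is moved into a private network namespace to act as the target side, its sibling port (cvl_0_1) stays in the root namespace as the initiator, and the two pings confirm the 10.0.0.0/24 link in both directions before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same setup, with interface names and addresses taken from the log (adjust for your hardware; requires root):

```shell
#!/usr/bin/env bash
# Recreate the two-namespace NVMe-oF/TCP test link traced in the log.
# Interface names (cvl_0_0, cvl_0_1) and addresses come from the log output;
# they are specific to this CI host's E810 ports. Must run as root.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port, moved into the namespace
INI_IF=cvl_0_1      # initiator-side port, stays in the root namespace

# Start from a clean slate, as the harness does.
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe-oF/TCP traffic (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions, mirroring the log's pings.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Tear-down is the reverse: `ip netns del cvl_0_0_ns_spdk` returns the port to the root namespace, and `ip -4 addr flush cvl_0_1` clears the initiator address.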
00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.894 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.151 Malloc0 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:27.151 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:59.352 Fuzzing completed. 
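The rpc_cmd calls traced above configure the fuzz target: a TCP transport, a RAM-backed bdev, and one subsystem exposing that bdev on 10.0.0.2:4420. Written out as direct scripts/rpc.py invocations against a running nvmf_tgt (default socket /var/tmp/spdk.sock; the rpc.py path is an assumption, point it at your SPDK checkout):

```shell
#!/usr/bin/env bash
# The rpc_cmd sequence from the trace, as explicit rpc.py invocations.
# Assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock.
set -euo pipefail

RPC=./scripts/rpc.py   # assumed path inside an SPDK checkout

# TCP transport, with the flags used by the harness (-o -u 8192).
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks to serve as the namespace.
$RPC bdev_malloc_create -b Malloc0 64 512

# Subsystem that accepts any host (-a) with a fixed serial number,
# backed by the malloc bdev and listening on the namespaced address.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

With that in place, nvme_fuzz is pointed at the resulting transport ID string (`trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420`), as the log shows.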
Shutting down the fuzz application 00:24:59.352 00:24:59.352 Dumping successful admin opcodes: 00:24:59.352 8, 9, 10, 24, 00:24:59.352 Dumping successful io opcodes: 00:24:59.352 0, 9, 00:24:59.352 NS: 0x200003aeff00 I/O qp, Total commands completed: 523661, total successful commands: 3041, random_seed: 1719341952 00:24:59.352 NS: 0x200003aeff00 admin qp, Total commands completed: 62992, total successful commands: 496, random_seed: 252250048 00:24:59.352 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:59.352 Fuzzing completed. Shutting down the fuzz application 00:24:59.352 00:24:59.352 Dumping successful admin opcodes: 00:24:59.352 24, 00:24:59.352 Dumping successful io opcodes: 00:24:59.352 00:24:59.352 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1207256282 00:24:59.352 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1207371728 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:59.352 18:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.352 rmmod nvme_tcp 00:24:59.352 rmmod nvme_fabrics 00:24:59.352 rmmod nvme_keyring 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2398153 ']' 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2398153 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2398153 ']' 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 2398153 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2398153 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = 
sudo ']' 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2398153' 00:24:59.352 killing process with pid 2398153 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 2398153 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 2398153 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.352 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:01.256 00:25:01.256 real 0m36.727s 00:25:01.256 user 0m51.541s 00:25:01.256 sys 0m14.485s 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # 
set +x 00:25:01.256 ************************************ 00:25:01.256 END TEST nvmf_fuzz 00:25:01.256 ************************************ 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.256 18:13:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:01.514 ************************************ 00:25:01.514 START TEST nvmf_multiconnection 00:25:01.514 ************************************ 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:01.514 * Looking for test storage... 
00:25:01.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.514 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:01.515 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:01.515 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 
00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.418 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:03.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:03.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:03.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:25:03.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.419 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.419 18:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:03.419 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:03.419 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.419 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.419 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.419 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.419 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:03.419 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:03.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:03.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:25:03.678 00:25:03.678 --- 10.0.0.2 ping statistics --- 00:25:03.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.678 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:25:03.678 00:25:03.678 --- 10.0.0.1 ping statistics --- 00:25:03.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.678 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:03.678 18:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2403765 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2403765 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 2403765 ']' 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.678 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.678 [2024-07-23 18:13:11.195551] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:25:03.678 [2024-07-23 18:13:11.195644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.678 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.678 [2024-07-23 18:13:11.261778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.938 [2024-07-23 18:13:11.352986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.938 [2024-07-23 18:13:11.353030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.938 [2024-07-23 18:13:11.353059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.938 [2024-07-23 18:13:11.353070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.938 [2024-07-23 18:13:11.353080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:03.938 [2024-07-23 18:13:11.353158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.938 [2024-07-23 18:13:11.353224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.938 [2024-07-23 18:13:11.353246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.938 [2024-07-23 18:13:11.353249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.938 [2024-07-23 18:13:11.504988] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:03.938 18:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.938 Malloc1 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.938 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.939 [2024-07-23 18:13:11.561979] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:03.939 Malloc2 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.939 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 Malloc3 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 Malloc4 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 
18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 Malloc5 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 Malloc6 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:04.197 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.198 Malloc7 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.198 18:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.198 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 Malloc8 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 18:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.455 Malloc9 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:04.455 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 Malloc10 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 Malloc11 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:04.456 
18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.456 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
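The xtrace above repeats multiconnection.sh lines 21-25 once per subsystem (Malloc1..Malloc11, cnode1..cnode11). As a minimal dry-run sketch of that loop: `rpc_cmd` is stubbed here to a counter, whereas in the real test it wraps `scripts/rpc.py` against the running nvmf_tgt, so this only shows the shape of the RPC sequence, not a working target setup.

```shell
# Dry-run sketch of the per-subsystem setup loop traced above.
# rpc_cmd is a stub; the real test forwards these to scripts/rpc.py.
NVMF_SUBSYS=11
n=0
last=""
rpc_cmd() {
    n=$((n + 1))          # count issued RPCs instead of running them
    last="rpc.py $*"
    echo "$last"
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
```

Four RPCs per subsystem, eleven subsystems: a malloc bdev, a subsystem with serial SPDK$i, the namespace attach, and one TCP listener per subsystem on 10.0.0.2:4420.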
00:25:05.389 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:05.389 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:05.389 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.389 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:05.389 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:07.282 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:07.282 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:07.282 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:07.282 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:07.282 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.282 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:07.282 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.282 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:07.847 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:07.847 18:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.847 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.847 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.847 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.744 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.744 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.744 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:10.001 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:10.001 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.001 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:10.001 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.001 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:10.567 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:10.567 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:10.567 18:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.567 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:10.567 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:12.464 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:12.464 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:12.464 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:12.464 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:12.464 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.464 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:12.464 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.464 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:13.397 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:13.397 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:13.397 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.397 
18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:13.397 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:15.295 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:15.295 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:15.295 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:15.295 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:15.295 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.295 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:15.295 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.295 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:16.227 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:16.227 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:16.227 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.227 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:16.227 18:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:18.122 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:18.122 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:18.122 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:18.122 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:18.122 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.122 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:18.122 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.122 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:18.684 18:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:18.684 18:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:18.684 18:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.684 18:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:18.684 18:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:21.236 18:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:21.236 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:21.236 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:21.236 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:21.236 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.236 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:21.236 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.236 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:21.493 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:21.493 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.493 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.493 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:21.493 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:24.019 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:24.019 18:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:24.019 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:24.019 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:24.019 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.019 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:24.019 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.019 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:24.277 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:24.277 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:24.277 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.277 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:24.277 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:26.173 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:26.173 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:26.173 18:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:26.173 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:26.173 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.173 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:26.173 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.173 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:27.105 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:27.105 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:27.105 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.105 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:27.105 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:29.003 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:29.003 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:29.003 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:29.003 18:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:29.003 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.003 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:29.003 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.003 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:29.936 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:29.936 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:29.936 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.936 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:29.936 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:31.833 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:31.833 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:31.833 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:32.090 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:32.090 18:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.090 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:32.090 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.090 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:33.024 18:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:33.024 18:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:33.024 18:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.024 18:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:33.024 18:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:34.922 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:34.922 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:34.922 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:34.922 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:34.922 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.922 
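The connect phase traced above (multiconnection.sh lines 28-30) repeats `nvme connect` plus `waitforserial` for each of the 11 subsystems. A dry-run sketch with `nvme` stubbed out so nothing actually connects; the host UUID is the one appearing in this run's trace:

```shell
# Dry-run sketch of the host-side connect loop traced above.
# nvme_cmd is a stub; a real run needs nvme-cli and the target listening.
NVMF_SUBSYS=11
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
n=0
last=""
nvme_cmd() {
    n=$((n + 1))
    last="nvme $*"
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme_cmd connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" \
        --hostid="$HOSTID" -t tcp -n "nqn.2016-06.io.spdk:cnode$i" \
        -a 10.0.0.2 -s 4420
    # waitforserial SPDK$i then polls `lsblk -l -o NAME,SERIAL | grep -c SPDK$i`
    # with 2-second sleeps (up to ~16 tries, as in the trace) until the new
    # namespace appears as a block device with that serial.
done
```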
18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:34.922 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:25:34.922 [global]
00:25:34.922 thread=1
00:25:34.922 invalidate=1
00:25:34.922 rw=read
00:25:34.922 time_based=1
00:25:34.922 runtime=10
00:25:34.922 ioengine=libaio
00:25:34.922 direct=1
00:25:34.922 bs=262144
00:25:34.922 iodepth=64
00:25:34.922 norandommap=1
00:25:34.922 numjobs=1
00:25:34.922
00:25:34.922 [job0]
00:25:34.922 filename=/dev/nvme0n1
00:25:34.922 [job1]
00:25:34.922 filename=/dev/nvme10n1
00:25:34.922 [job2]
00:25:34.922 filename=/dev/nvme1n1
00:25:34.922 [job3]
00:25:34.922 filename=/dev/nvme2n1
00:25:34.922 [job4]
00:25:34.922 filename=/dev/nvme3n1
00:25:34.922 [job5]
00:25:34.922 filename=/dev/nvme4n1
00:25:34.922 [job6]
00:25:34.922 filename=/dev/nvme5n1
00:25:34.922 [job7]
00:25:34.922 filename=/dev/nvme6n1
00:25:34.922 [job8]
00:25:34.922 filename=/dev/nvme7n1
00:25:34.922 [job9]
00:25:34.922 filename=/dev/nvme8n1
00:25:34.922 [job10]
00:25:34.922 filename=/dev/nvme9n1
00:25:34.922 Could not set queue depth (nvme0n1)
00:25:34.922 Could not set queue depth (nvme10n1)
00:25:34.922 Could not set queue depth (nvme1n1)
00:25:34.922 Could not set queue depth (nvme2n1)
00:25:34.922 Could not set queue depth (nvme3n1)
00:25:34.922 Could not set queue depth (nvme4n1)
00:25:34.922 Could not set queue depth (nvme5n1)
00:25:34.922 Could not set queue depth (nvme6n1)
00:25:34.922 Could not set queue depth (nvme7n1)
00:25:34.922 Could not set queue depth (nvme8n1)
00:25:34.922 Could not set queue depth (nvme9n1)
00:25:35.180 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.180 fio-3.35
00:25:35.180 Starting 11 threads
00:25:47.382
00:25:47.382 job0: (groupid=0, jobs=1): err= 0: pid=2407907: Tue Jul 23 18:13:53 2024
00:25:47.382 read: IOPS=581, BW=145MiB/s (152MB/s)(1470MiB/10111msec)
00:25:47.382 slat (usec): min=9, max=70477, avg=1204.02, stdev=4423.15
00:25:47.382 clat (usec): min=672, max=236374, avg=108804.64, stdev=45831.34
00:25:47.382 lat (usec): min=694, max=237496, avg=110008.66, stdev=46425.84
00:25:47.382 clat percentiles (msec):
00:25:47.382 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 47], 20.00th=[ 65],
00:25:47.382 | 30.00th=[ 80], 40.00th=[ 101], 50.00th=[ 115], 60.00th=[ 131],
00:25:47.382 | 70.00th=[ 140], 80.00th=[ 148], 90.00th=[ 163], 95.00th=[ 176],
00:25:47.382 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 230], 99.95th=[ 236],
00:25:47.382 | 99.99th=[ 236]
00:25:47.382 bw ( KiB/s): min=87552, max=251392, per=7.43%, avg=148839.30, stdev=54304.66, samples=20
00:25:47.382 iops : min= 342, max= 982, avg=581.35, stdev=212.14, samples=20
00:25:47.382 lat (usec) : 750=0.05%, 1000=0.07%
00:25:47.382 lat (msec) : 2=0.02%, 4=0.03%, 10=1.14%, 20=1.91%, 50=7.91%
00:25:47.382 lat (msec) : 100=28.26%, 250=60.62%
00:25:47.382 cpu : usr=0.30%, sys=1.83%, ctx=1329, majf=0, minf=4097
00:25:47.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:25:47.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:47.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:47.382 issued rwts: total=5878,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:47.382 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:47.382 job1: (groupid=0, jobs=1): err= 0: pid=2407921: Tue Jul 23 18:13:53 2024
00:25:47.382 read: IOPS=525, BW=131MiB/s (138MB/s)(1329MiB/10115msec)
00:25:47.382 slat (usec): min=9, max=80040, avg=1648.54, stdev=4976.32
00:25:47.382 clat (msec): min=3, max=244, avg=120.02, stdev=33.31
00:25:47.382 lat (msec): min=3, max=244, avg=121.67, stdev=33.93
00:25:47.382 clat percentiles (msec):
00:25:47.382 | 1.00th=[ 28], 5.00th=[ 64], 10.00th=[ 79], 20.00th=[ 95],
00:25:47.382 | 30.00th=[ 105], 40.00th=[ 112], 50.00th=[ 121], 60.00th=[ 131],
00:25:47.382 | 70.00th=[ 140], 80.00th=[ 148], 90.00th=[ 159], 95.00th=[ 169],
00:25:47.382 | 99.00th=[ 199], 99.50th=[ 205], 99.90th=[ 236], 99.95th=[ 245],
00:25:47.382 | 99.99th=[ 245]
00:25:47.382 bw ( KiB/s): min=90112, max=207872, per=6.71%, avg=134448.30, stdev=28415.69, samples=20
00:25:47.382 iops : min= 352, max= 812, avg=525.10, stdev=110.97, samples=20
00:25:47.382 lat (msec) : 4=0.04%, 10=0.06%, 20=0.56%, 50=1.73%, 100=22.93%
00:25:47.382 lat (msec) : 250=74.68%
00:25:47.382 cpu : usr=0.26%, sys=1.87%, ctx=1167, majf=0, minf=4097
00:25:47.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:25:47.382 submit : 0=0.0%,
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.382 issued rwts: total=5316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.382 job2: (groupid=0, jobs=1): err= 0: pid=2407962: Tue Jul 23 18:13:53 2024 00:25:47.382 read: IOPS=1074, BW=269MiB/s (282MB/s)(2688MiB/10010msec) 00:25:47.382 slat (usec): min=9, max=61995, avg=591.84, stdev=2505.21 00:25:47.382 clat (msec): min=2, max=218, avg=58.96, stdev=39.91 00:25:47.382 lat (msec): min=2, max=218, avg=59.55, stdev=40.15 00:25:47.382 clat percentiles (msec): 00:25:47.382 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 29], 00:25:47.382 | 30.00th=[ 30], 40.00th=[ 33], 50.00th=[ 42], 60.00th=[ 52], 00:25:47.382 | 70.00th=[ 67], 80.00th=[ 102], 90.00th=[ 127], 95.00th=[ 142], 00:25:47.382 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 213], 00:25:47.382 | 99.99th=[ 220] 00:25:47.382 bw ( KiB/s): min=117760, max=555008, per=13.48%, avg=270173.00, stdev=145397.43, samples=19 00:25:47.382 iops : min= 460, max= 2168, avg=1055.26, stdev=567.92, samples=19 00:25:47.382 lat (msec) : 4=0.09%, 10=0.95%, 20=2.51%, 50=54.28%, 100=21.64% 00:25:47.382 lat (msec) : 250=20.53% 00:25:47.382 cpu : usr=0.40%, sys=3.21%, ctx=2096, majf=0, minf=4097 00:25:47.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:47.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.382 issued rwts: total=10751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.382 job3: (groupid=0, jobs=1): err= 0: pid=2407979: Tue Jul 23 18:13:53 2024 00:25:47.382 read: IOPS=642, BW=161MiB/s (169MB/s)(1626MiB/10112msec) 00:25:47.382 slat (usec): min=13, max=83084, 
avg=1532.83, stdev=4607.29 00:25:47.382 clat (msec): min=25, max=240, avg=97.91, stdev=47.06 00:25:47.382 lat (msec): min=25, max=240, avg=99.45, stdev=47.80 00:25:47.382 clat percentiles (msec): 00:25:47.382 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 53], 00:25:47.382 | 30.00th=[ 63], 40.00th=[ 75], 50.00th=[ 94], 60.00th=[ 114], 00:25:47.382 | 70.00th=[ 133], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 174], 00:25:47.382 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 234], 99.95th=[ 241], 00:25:47.382 | 99.99th=[ 241] 00:25:47.382 bw ( KiB/s): min=93184, max=402432, per=8.22%, avg=164812.80, stdev=85331.00, samples=20 00:25:47.382 iops : min= 364, max= 1572, avg=643.75, stdev=333.34, samples=20 00:25:47.382 lat (msec) : 50=18.15%, 100=34.53%, 250=47.32% 00:25:47.382 cpu : usr=0.33%, sys=2.16%, ctx=1223, majf=0, minf=4097 00:25:47.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:47.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.382 issued rwts: total=6502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.382 job4: (groupid=0, jobs=1): err= 0: pid=2407991: Tue Jul 23 18:13:53 2024 00:25:47.382 read: IOPS=739, BW=185MiB/s (194MB/s)(1870MiB/10110msec) 00:25:47.382 slat (usec): min=9, max=61335, avg=906.91, stdev=3369.75 00:25:47.382 clat (msec): min=2, max=199, avg=85.53, stdev=34.06 00:25:47.382 lat (msec): min=2, max=219, avg=86.43, stdev=34.43 00:25:47.382 clat percentiles (msec): 00:25:47.382 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 59], 00:25:47.382 | 30.00th=[ 68], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 94], 00:25:47.382 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 128], 95.00th=[ 150], 00:25:47.382 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 201], 00:25:47.382 | 99.99th=[ 201] 00:25:47.382 bw ( KiB/s): 
min=109056, max=262656, per=9.47%, avg=189828.90, stdev=42098.50, samples=20 00:25:47.382 iops : min= 426, max= 1026, avg=741.50, stdev=164.43, samples=20 00:25:47.382 lat (msec) : 4=0.74%, 10=1.20%, 20=1.87%, 50=8.97%, 100=54.21% 00:25:47.382 lat (msec) : 250=33.01% 00:25:47.382 cpu : usr=0.34%, sys=2.16%, ctx=1521, majf=0, minf=4097 00:25:47.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:47.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.382 issued rwts: total=7480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.383 job5: (groupid=0, jobs=1): err= 0: pid=2408032: Tue Jul 23 18:13:53 2024 00:25:47.383 read: IOPS=741, BW=185MiB/s (194MB/s)(1865MiB/10061msec) 00:25:47.383 slat (usec): min=8, max=40062, avg=1140.78, stdev=3676.31 00:25:47.383 clat (msec): min=7, max=219, avg=85.11, stdev=32.14 00:25:47.383 lat (msec): min=7, max=223, avg=86.25, stdev=32.70 00:25:47.383 clat percentiles (msec): 00:25:47.383 | 1.00th=[ 19], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 58], 00:25:47.383 | 30.00th=[ 67], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 92], 00:25:47.383 | 70.00th=[ 102], 80.00th=[ 110], 90.00th=[ 130], 95.00th=[ 144], 00:25:47.383 | 99.00th=[ 165], 99.50th=[ 188], 99.90th=[ 211], 99.95th=[ 218], 00:25:47.383 | 99.99th=[ 220] 00:25:47.383 bw ( KiB/s): min=95232, max=281037, per=9.45%, avg=189322.15, stdev=53772.25, samples=20 00:25:47.383 iops : min= 372, max= 1097, avg=739.45, stdev=210.03, samples=20 00:25:47.383 lat (msec) : 10=0.12%, 20=1.03%, 50=12.37%, 100=55.40%, 250=31.07% 00:25:47.383 cpu : usr=0.54%, sys=2.08%, ctx=1333, majf=0, minf=4097 00:25:47.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:47.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.383 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.383 issued rwts: total=7460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.383 job6: (groupid=0, jobs=1): err= 0: pid=2408037: Tue Jul 23 18:13:53 2024 00:25:47.383 read: IOPS=759, BW=190MiB/s (199MB/s)(1927MiB/10142msec) 00:25:47.383 slat (usec): min=9, max=86310, avg=1058.11, stdev=3934.04 00:25:47.383 clat (usec): min=1652, max=270852, avg=83107.70, stdev=44669.88 00:25:47.383 lat (usec): min=1664, max=272845, avg=84165.82, stdev=45327.09 00:25:47.383 clat percentiles (msec): 00:25:47.383 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 45], 00:25:47.383 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 73], 60.00th=[ 85], 00:25:47.383 | 70.00th=[ 103], 80.00th=[ 131], 90.00th=[ 148], 95.00th=[ 163], 00:25:47.383 | 99.00th=[ 192], 99.50th=[ 211], 99.90th=[ 241], 99.95th=[ 245], 00:25:47.383 | 99.99th=[ 271] 00:25:47.383 bw ( KiB/s): min=94720, max=357376, per=9.76%, avg=195639.35, stdev=77645.05, samples=20 00:25:47.383 iops : min= 370, max= 1396, avg=764.20, stdev=303.31, samples=20 00:25:47.383 lat (msec) : 2=0.03%, 4=0.31%, 10=1.03%, 20=2.15%, 50=24.32% 00:25:47.383 lat (msec) : 100=41.29%, 250=30.86%, 500=0.01% 00:25:47.383 cpu : usr=0.47%, sys=2.43%, ctx=1413, majf=0, minf=4097 00:25:47.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:47.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.383 issued rwts: total=7706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.383 job7: (groupid=0, jobs=1): err= 0: pid=2408038: Tue Jul 23 18:13:53 2024 00:25:47.383 read: IOPS=579, BW=145MiB/s (152MB/s)(1465MiB/10114msec) 00:25:47.383 slat (usec): min=9, max=131083, avg=1211.77, stdev=4906.08 00:25:47.383 clat 
(usec): min=911, max=248355, avg=109149.99, stdev=53165.29 00:25:47.383 lat (usec): min=931, max=266578, avg=110361.76, stdev=53951.82 00:25:47.383 clat percentiles (msec): 00:25:47.383 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 20], 20.00th=[ 59], 00:25:47.383 | 30.00th=[ 90], 40.00th=[ 108], 50.00th=[ 121], 60.00th=[ 133], 00:25:47.383 | 70.00th=[ 142], 80.00th=[ 153], 90.00th=[ 165], 95.00th=[ 180], 00:25:47.383 | 99.00th=[ 222], 99.50th=[ 241], 99.90th=[ 243], 99.95th=[ 243], 00:25:47.383 | 99.99th=[ 249] 00:25:47.383 bw ( KiB/s): min=102912, max=206848, per=7.41%, avg=148399.65, stdev=33515.29, samples=20 00:25:47.383 iops : min= 402, max= 808, avg=579.60, stdev=130.93, samples=20 00:25:47.383 lat (usec) : 1000=0.03% 00:25:47.383 lat (msec) : 2=0.31%, 4=1.18%, 10=4.71%, 20=4.38%, 50=7.71% 00:25:47.383 lat (msec) : 100=15.63%, 250=66.05% 00:25:47.383 cpu : usr=0.38%, sys=1.83%, ctx=1368, majf=0, minf=4097 00:25:47.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:47.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.383 issued rwts: total=5861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.383 job8: (groupid=0, jobs=1): err= 0: pid=2408039: Tue Jul 23 18:13:53 2024 00:25:47.383 read: IOPS=979, BW=245MiB/s (257MB/s)(2462MiB/10057msec) 00:25:47.383 slat (usec): min=9, max=78799, avg=933.56, stdev=3207.69 00:25:47.383 clat (msec): min=2, max=246, avg=64.37, stdev=37.11 00:25:47.383 lat (msec): min=2, max=246, avg=65.30, stdev=37.62 00:25:47.383 clat percentiles (msec): 00:25:47.383 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 31], 00:25:47.383 | 30.00th=[ 36], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 66], 00:25:47.383 | 70.00th=[ 75], 80.00th=[ 89], 90.00th=[ 124], 95.00th=[ 144], 00:25:47.383 | 99.00th=[ 167], 99.50th=[ 188], 99.90th=[ 
207], 99.95th=[ 211], 00:25:47.383 | 99.99th=[ 247] 00:25:47.383 bw ( KiB/s): min=98304, max=463872, per=12.50%, avg=250468.40, stdev=107951.18, samples=20 00:25:47.383 iops : min= 384, max= 1812, avg=978.30, stdev=421.67, samples=20 00:25:47.383 lat (msec) : 4=0.05%, 10=0.67%, 20=3.43%, 50=35.01%, 100=45.34% 00:25:47.383 lat (msec) : 250=15.49% 00:25:47.383 cpu : usr=0.67%, sys=2.83%, ctx=1767, majf=0, minf=3723 00:25:47.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:47.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.383 issued rwts: total=9849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.383 job9: (groupid=0, jobs=1): err= 0: pid=2408040: Tue Jul 23 18:13:53 2024 00:25:47.383 read: IOPS=657, BW=164MiB/s (172MB/s)(1653MiB/10055msec) 00:25:47.383 slat (usec): min=8, max=48423, avg=1047.54, stdev=3675.26 00:25:47.383 clat (usec): min=1277, max=209025, avg=96248.22, stdev=40849.21 00:25:47.383 lat (usec): min=1299, max=212375, avg=97295.76, stdev=41349.42 00:25:47.383 clat percentiles (msec): 00:25:47.383 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 48], 20.00th=[ 60], 00:25:47.383 | 30.00th=[ 69], 40.00th=[ 88], 50.00th=[ 100], 60.00th=[ 107], 00:25:47.383 | 70.00th=[ 114], 80.00th=[ 130], 90.00th=[ 155], 95.00th=[ 167], 00:25:47.383 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 203], 99.95th=[ 205], 00:25:47.383 | 99.99th=[ 209] 00:25:47.383 bw ( KiB/s): min=90112, max=302499, per=8.36%, avg=167561.15, stdev=55855.61, samples=20 00:25:47.383 iops : min= 352, max= 1181, avg=654.50, stdev=218.11, samples=20 00:25:47.383 lat (msec) : 2=0.23%, 4=0.41%, 10=0.41%, 20=2.25%, 50=8.05% 00:25:47.383 lat (msec) : 100=39.17%, 250=49.49% 00:25:47.383 cpu : usr=0.31%, sys=2.08%, ctx=1500, majf=0, minf=4097 00:25:47.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:47.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.383 issued rwts: total=6610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.383 job10: (groupid=0, jobs=1): err= 0: pid=2408041: Tue Jul 23 18:13:53 2024 00:25:47.383 read: IOPS=593, BW=148MiB/s (156MB/s)(1494MiB/10062msec) 00:25:47.383 slat (usec): min=9, max=63170, avg=1389.31, stdev=4549.90 00:25:47.383 clat (msec): min=3, max=226, avg=106.33, stdev=38.27 00:25:47.383 lat (msec): min=3, max=226, avg=107.72, stdev=38.93 00:25:47.383 clat percentiles (msec): 00:25:47.383 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 53], 20.00th=[ 80], 00:25:47.383 | 30.00th=[ 90], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 112], 00:25:47.383 | 70.00th=[ 125], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 167], 00:25:47.383 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 224], 99.95th=[ 226], 00:25:47.383 | 99.99th=[ 228] 00:25:47.383 bw ( KiB/s): min=91648, max=260608, per=7.55%, avg=151296.85, stdev=42719.21, samples=20 00:25:47.383 iops : min= 358, max= 1018, avg=591.00, stdev=166.87, samples=20 00:25:47.383 lat (msec) : 4=0.03%, 10=0.75%, 20=0.75%, 50=7.75%, 100=33.81% 00:25:47.383 lat (msec) : 250=56.90% 00:25:47.383 cpu : usr=0.33%, sys=2.06%, ctx=1252, majf=0, minf=4097 00:25:47.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:47.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.383 issued rwts: total=5974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.383 00:25:47.384 Run status group 0 (all jobs): 00:25:47.384 READ: bw=1957MiB/s (2052MB/s), 131MiB/s-269MiB/s (138MB/s-282MB/s), 
io=19.4GiB (20.8GB), run=10010-10142msec 00:25:47.384 00:25:47.384 Disk stats (read/write): 00:25:47.384 nvme0n1: ios=11572/0, merge=0/0, ticks=1236751/0, in_queue=1236751, util=97.04% 00:25:47.384 nvme10n1: ios=10430/0, merge=0/0, ticks=1228699/0, in_queue=1228699, util=97.26% 00:25:47.384 nvme1n1: ios=21087/0, merge=0/0, ticks=1244741/0, in_queue=1244741, util=97.57% 00:25:47.384 nvme2n1: ios=12826/0, merge=0/0, ticks=1230702/0, in_queue=1230702, util=97.74% 00:25:47.384 nvme3n1: ios=14654/0, merge=0/0, ticks=1240220/0, in_queue=1240220, util=97.83% 00:25:47.384 nvme4n1: ios=14682/0, merge=0/0, ticks=1237308/0, in_queue=1237308, util=98.20% 00:25:47.384 nvme5n1: ios=15163/0, merge=0/0, ticks=1232247/0, in_queue=1232247, util=98.37% 00:25:47.384 nvme6n1: ios=11520/0, merge=0/0, ticks=1232876/0, in_queue=1232876, util=98.49% 00:25:47.384 nvme7n1: ios=19428/0, merge=0/0, ticks=1235500/0, in_queue=1235500, util=98.90% 00:25:47.384 nvme8n1: ios=12911/0, merge=0/0, ticks=1242140/0, in_queue=1242140, util=99.08% 00:25:47.384 nvme9n1: ios=11704/0, merge=0/0, ticks=1232159/0, in_queue=1232159, util=99.21% 00:25:47.384 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:47.384 [global] 00:25:47.384 thread=1 00:25:47.384 invalidate=1 00:25:47.384 rw=randwrite 00:25:47.384 time_based=1 00:25:47.384 runtime=10 00:25:47.384 ioengine=libaio 00:25:47.384 direct=1 00:25:47.384 bs=262144 00:25:47.384 iodepth=64 00:25:47.384 norandommap=1 00:25:47.384 numjobs=1 00:25:47.384 00:25:47.384 [job0] 00:25:47.384 filename=/dev/nvme0n1 00:25:47.384 [job1] 00:25:47.384 filename=/dev/nvme10n1 00:25:47.384 [job2] 00:25:47.384 filename=/dev/nvme1n1 00:25:47.384 [job3] 00:25:47.384 filename=/dev/nvme2n1 00:25:47.384 [job4] 00:25:47.384 filename=/dev/nvme3n1 00:25:47.384 [job5] 00:25:47.384 filename=/dev/nvme4n1 00:25:47.384 [job6] 
00:25:47.384 filename=/dev/nvme5n1 00:25:47.384 [job7] 00:25:47.384 filename=/dev/nvme6n1 00:25:47.384 [job8] 00:25:47.384 filename=/dev/nvme7n1 00:25:47.384 [job9] 00:25:47.384 filename=/dev/nvme8n1 00:25:47.384 [job10] 00:25:47.384 filename=/dev/nvme9n1 00:25:47.384 Could not set queue depth (nvme0n1) 00:25:47.384 Could not set queue depth (nvme10n1) 00:25:47.384 Could not set queue depth (nvme1n1) 00:25:47.384 Could not set queue depth (nvme2n1) 00:25:47.384 Could not set queue depth (nvme3n1) 00:25:47.384 Could not set queue depth (nvme4n1) 00:25:47.384 Could not set queue depth (nvme5n1) 00:25:47.384 Could not set queue depth (nvme6n1) 00:25:47.384 Could not set queue depth (nvme7n1) 00:25:47.384 Could not set queue depth (nvme8n1) 00:25:47.384 Could not set queue depth (nvme9n1) 00:25:47.384 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job9: (g=0): rw=randwrite, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:47.384 fio-3.35 00:25:47.384 Starting 11 threads 00:25:57.370 00:25:57.370 job0: (groupid=0, jobs=1): err= 0: pid=2409061: Tue Jul 23 18:14:04 2024 00:25:57.370 write: IOPS=543, BW=136MiB/s (143MB/s)(1373MiB/10102msec); 0 zone resets 00:25:57.370 slat (usec): min=18, max=89925, avg=1306.09, stdev=4022.89 00:25:57.370 clat (usec): min=1004, max=402488, avg=116311.21, stdev=83915.73 00:25:57.370 lat (usec): min=1040, max=404437, avg=117617.30, stdev=85003.88 00:25:57.370 clat percentiles (msec): 00:25:57.370 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 46], 20.00th=[ 50], 00:25:57.370 | 30.00th=[ 53], 40.00th=[ 64], 50.00th=[ 88], 60.00th=[ 120], 00:25:57.370 | 70.00th=[ 138], 80.00th=[ 182], 90.00th=[ 264], 95.00th=[ 292], 00:25:57.370 | 99.00th=[ 363], 99.50th=[ 384], 99.90th=[ 401], 99.95th=[ 401], 00:25:57.370 | 99.99th=[ 401] 00:25:57.370 bw ( KiB/s): min=47104, max=314368, per=10.00%, avg=138982.40, stdev=81655.40, samples=20 00:25:57.370 iops : min= 184, max= 1228, avg=542.90, stdev=318.97, samples=20 00:25:57.370 lat (msec) : 2=0.07%, 4=0.22%, 10=1.38%, 20=1.66%, 50=18.63% 00:25:57.370 lat (msec) : 100=31.41%, 250=35.45%, 500=11.18% 00:25:57.370 cpu : usr=1.42%, sys=1.94%, ctx=2766, majf=0, minf=1 00:25:57.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:57.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.370 issued rwts: total=0,5492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.370 job1: (groupid=0, jobs=1): err= 0: pid=2409074: Tue Jul 23 18:14:04 2024 00:25:57.370 write: IOPS=531, BW=133MiB/s (139MB/s)(1339MiB/10076msec); 0 
zone resets 00:25:57.370 slat (usec): min=16, max=105565, avg=1100.30, stdev=3622.86 00:25:57.370 clat (usec): min=808, max=394608, avg=119275.03, stdev=74707.14 00:25:57.370 lat (usec): min=839, max=394672, avg=120375.33, stdev=75330.40 00:25:57.370 clat percentiles (usec): 00:25:57.370 | 1.00th=[ 1844], 5.00th=[ 6456], 10.00th=[ 19268], 20.00th=[ 47449], 00:25:57.370 | 30.00th=[ 80217], 40.00th=[ 95945], 50.00th=[115868], 60.00th=[129500], 00:25:57.370 | 70.00th=[156238], 80.00th=[181404], 90.00th=[204473], 95.00th=[254804], 00:25:57.370 | 99.00th=[329253], 99.50th=[362808], 99.90th=[387974], 99.95th=[392168], 00:25:57.370 | 99.99th=[396362] 00:25:57.370 bw ( KiB/s): min=92672, max=222208, per=9.75%, avg=135475.20, stdev=34635.78, samples=20 00:25:57.370 iops : min= 362, max= 868, avg=529.20, stdev=135.30, samples=20 00:25:57.370 lat (usec) : 1000=0.24% 00:25:57.370 lat (msec) : 2=0.97%, 4=2.02%, 10=3.40%, 20=3.49%, 50=10.23% 00:25:57.370 lat (msec) : 100=21.96%, 250=52.49%, 500=5.19% 00:25:57.370 cpu : usr=1.41%, sys=1.89%, ctx=3314, majf=0, minf=1 00:25:57.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.370 issued rwts: total=0,5355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.370 job2: (groupid=0, jobs=1): err= 0: pid=2409076: Tue Jul 23 18:14:04 2024 00:25:57.370 write: IOPS=519, BW=130MiB/s (136MB/s)(1322MiB/10170msec); 0 zone resets 00:25:57.370 slat (usec): min=18, max=66462, avg=1314.60, stdev=4003.65 00:25:57.370 clat (usec): min=1467, max=381020, avg=121712.65, stdev=82851.50 00:25:57.370 lat (usec): min=1589, max=381082, avg=123027.25, stdev=83997.47 00:25:57.370 clat percentiles (msec): 00:25:57.370 | 1.00th=[ 6], 5.00th=[ 19], 10.00th=[ 40], 20.00th=[ 45], 00:25:57.370 | 
30.00th=[ 58], 40.00th=[ 79], 50.00th=[ 97], 60.00th=[ 126], 00:25:57.370 | 70.00th=[ 171], 80.00th=[ 203], 90.00th=[ 251], 95.00th=[ 279], 00:25:57.370 | 99.00th=[ 313], 99.50th=[ 330], 99.90th=[ 372], 99.95th=[ 372], 00:25:57.370 | 99.99th=[ 380] 00:25:57.370 bw ( KiB/s): min=57344, max=366592, per=9.63%, avg=133743.20, stdev=73375.91, samples=20 00:25:57.370 iops : min= 224, max= 1432, avg=522.40, stdev=286.65, samples=20 00:25:57.370 lat (msec) : 2=0.04%, 4=0.61%, 10=1.66%, 20=2.93%, 50=20.26% 00:25:57.370 lat (msec) : 100=25.48%, 250=38.70%, 500=10.33% 00:25:57.370 cpu : usr=1.59%, sys=1.82%, ctx=3029, majf=0, minf=1 00:25:57.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.370 issued rwts: total=0,5287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.370 job3: (groupid=0, jobs=1): err= 0: pid=2409077: Tue Jul 23 18:14:04 2024 00:25:57.370 write: IOPS=584, BW=146MiB/s (153MB/s)(1487MiB/10175msec); 0 zone resets 00:25:57.370 slat (usec): min=16, max=30482, avg=1016.77, stdev=2877.08 00:25:57.370 clat (usec): min=867, max=359037, avg=108430.92, stdev=59247.93 00:25:57.370 lat (usec): min=892, max=359092, avg=109447.69, stdev=59915.92 00:25:57.370 clat percentiles (msec): 00:25:57.370 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 37], 20.00th=[ 49], 00:25:57.370 | 30.00th=[ 71], 40.00th=[ 83], 50.00th=[ 108], 60.00th=[ 124], 00:25:57.370 | 70.00th=[ 142], 80.00th=[ 165], 90.00th=[ 190], 95.00th=[ 205], 00:25:57.370 | 99.00th=[ 236], 99.50th=[ 284], 99.90th=[ 347], 99.95th=[ 347], 00:25:57.370 | 99.99th=[ 359] 00:25:57.370 bw ( KiB/s): min=81920, max=304640, per=10.84%, avg=150619.85, stdev=58316.06, samples=20 00:25:57.370 iops : min= 320, max= 1190, avg=588.35, stdev=227.80, samples=20 00:25:57.370 
lat (usec) : 1000=0.05% 00:25:57.370 lat (msec) : 2=0.15%, 4=0.42%, 10=1.31%, 20=3.06%, 50=17.76% 00:25:57.370 lat (msec) : 100=24.06%, 250=52.45%, 500=0.74% 00:25:57.370 cpu : usr=1.55%, sys=1.81%, ctx=3331, majf=0, minf=1 00:25:57.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:57.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.370 issued rwts: total=0,5947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.370 job4: (groupid=0, jobs=1): err= 0: pid=2409078: Tue Jul 23 18:14:04 2024 00:25:57.370 write: IOPS=444, BW=111MiB/s (117MB/s)(1131MiB/10170msec); 0 zone resets 00:25:57.370 slat (usec): min=18, max=81339, avg=2128.63, stdev=4523.52 00:25:57.370 clat (usec): min=1129, max=362334, avg=141724.33, stdev=53369.81 00:25:57.370 lat (usec): min=1791, max=362377, avg=143852.96, stdev=54003.44 00:25:57.370 clat percentiles (msec): 00:25:57.370 | 1.00th=[ 5], 5.00th=[ 28], 10.00th=[ 77], 20.00th=[ 92], 00:25:57.370 | 30.00th=[ 118], 40.00th=[ 140], 50.00th=[ 150], 60.00th=[ 161], 00:25:57.370 | 70.00th=[ 174], 80.00th=[ 186], 90.00th=[ 203], 95.00th=[ 211], 00:25:57.370 | 99.00th=[ 228], 99.50th=[ 288], 99.90th=[ 351], 99.95th=[ 351], 00:25:57.370 | 99.99th=[ 363] 00:25:57.370 bw ( KiB/s): min=79872, max=198656, per=8.22%, avg=114150.40, stdev=33778.31, samples=20 00:25:57.371 iops : min= 312, max= 776, avg=445.90, stdev=131.95, samples=20 00:25:57.371 lat (msec) : 2=0.09%, 4=0.46%, 10=2.21%, 20=1.79%, 50=1.13% 00:25:57.371 lat (msec) : 100=16.96%, 250=76.51%, 500=0.84% 00:25:57.371 cpu : usr=1.29%, sys=1.29%, ctx=1468, majf=0, minf=1 00:25:57.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:57.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.371 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.371 issued rwts: total=0,4522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.371 job5: (groupid=0, jobs=1): err= 0: pid=2409079: Tue Jul 23 18:14:04 2024 00:25:57.371 write: IOPS=437, BW=109MiB/s (115MB/s)(1105MiB/10089msec); 0 zone resets 00:25:57.371 slat (usec): min=16, max=22267, avg=1542.51, stdev=3962.19 00:25:57.371 clat (msec): min=2, max=307, avg=144.55, stdev=64.66 00:25:57.371 lat (msec): min=2, max=312, avg=146.10, stdev=65.59 00:25:57.371 clat percentiles (msec): 00:25:57.371 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 58], 20.00th=[ 87], 00:25:57.371 | 30.00th=[ 105], 40.00th=[ 127], 50.00th=[ 155], 60.00th=[ 165], 00:25:57.371 | 70.00th=[ 184], 80.00th=[ 201], 90.00th=[ 222], 95.00th=[ 249], 00:25:57.371 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 305], 00:25:57.371 | 99.99th=[ 309] 00:25:57.371 bw ( KiB/s): min=72192, max=192000, per=8.03%, avg=111488.00, stdev=32590.63, samples=20 00:25:57.371 iops : min= 282, max= 750, avg=435.50, stdev=127.31, samples=20 00:25:57.371 lat (msec) : 4=0.05%, 10=0.63%, 20=1.83%, 50=5.89%, 100=19.17% 00:25:57.371 lat (msec) : 250=67.72%, 500=4.71% 00:25:57.371 cpu : usr=1.49%, sys=1.50%, ctx=2544, majf=0, minf=1 00:25:57.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:57.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.371 issued rwts: total=0,4418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.371 job6: (groupid=0, jobs=1): err= 0: pid=2409080: Tue Jul 23 18:14:04 2024 00:25:57.371 write: IOPS=327, BW=81.8MiB/s (85.7MB/s)(826MiB/10100msec); 0 zone resets 00:25:57.371 slat (usec): min=26, max=51689, avg=2896.68, stdev=5826.65 00:25:57.371 clat (msec): min=13, 
max=366, avg=192.73, stdev=75.90 00:25:57.371 lat (msec): min=13, max=366, avg=195.63, stdev=77.03 00:25:57.371 clat percentiles (msec): 00:25:57.371 | 1.00th=[ 33], 5.00th=[ 71], 10.00th=[ 78], 20.00th=[ 121], 00:25:57.371 | 30.00th=[ 161], 40.00th=[ 174], 50.00th=[ 199], 60.00th=[ 215], 00:25:57.371 | 70.00th=[ 239], 80.00th=[ 266], 90.00th=[ 288], 95.00th=[ 309], 00:25:57.371 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:25:57.371 | 99.99th=[ 368] 00:25:57.371 bw ( KiB/s): min=51200, max=194048, per=5.97%, avg=82944.00, stdev=35318.43, samples=20 00:25:57.371 iops : min= 200, max= 758, avg=324.00, stdev=137.96, samples=20 00:25:57.371 lat (msec) : 20=0.24%, 50=1.63%, 100=13.26%, 250=58.76%, 500=26.10% 00:25:57.371 cpu : usr=1.01%, sys=0.90%, ctx=1100, majf=0, minf=1 00:25:57.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:57.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.371 issued rwts: total=0,3303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.371 job7: (groupid=0, jobs=1): err= 0: pid=2409081: Tue Jul 23 18:14:04 2024 00:25:57.371 write: IOPS=440, BW=110MiB/s (115MB/s)(1118MiB/10157msec); 0 zone resets 00:25:57.371 slat (usec): min=22, max=76536, avg=1625.18, stdev=4654.89 00:25:57.371 clat (usec): min=1259, max=345233, avg=143626.19, stdev=84187.06 00:25:57.371 lat (usec): min=1351, max=345267, avg=145251.37, stdev=85396.29 00:25:57.371 clat percentiles (msec): 00:25:57.371 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 23], 20.00th=[ 59], 00:25:57.371 | 30.00th=[ 100], 40.00th=[ 122], 50.00th=[ 142], 60.00th=[ 161], 00:25:57.371 | 70.00th=[ 182], 80.00th=[ 222], 90.00th=[ 264], 95.00th=[ 292], 00:25:57.371 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:25:57.371 | 99.99th=[ 347] 00:25:57.371 bw ( 
KiB/s): min=51200, max=245248, per=8.12%, avg=112856.90, stdev=48986.07, samples=20 00:25:57.371 iops : min= 200, max= 958, avg=440.80, stdev=191.34, samples=20 00:25:57.371 lat (msec) : 2=0.16%, 4=0.83%, 10=2.42%, 20=5.23%, 50=9.13% 00:25:57.371 lat (msec) : 100=12.41%, 250=56.21%, 500=13.62% 00:25:57.371 cpu : usr=1.21%, sys=1.64%, ctx=2545, majf=0, minf=1 00:25:57.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:57.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.371 issued rwts: total=0,4471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.371 job8: (groupid=0, jobs=1): err= 0: pid=2409082: Tue Jul 23 18:14:04 2024 00:25:57.371 write: IOPS=507, BW=127MiB/s (133MB/s)(1289MiB/10166msec); 0 zone resets 00:25:57.371 slat (usec): min=16, max=142433, avg=1132.20, stdev=4148.62 00:25:57.371 clat (usec): min=752, max=404981, avg=124996.32, stdev=86479.06 00:25:57.371 lat (usec): min=815, max=409089, avg=126128.52, stdev=87408.50 00:25:57.371 clat percentiles (msec): 00:25:57.371 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 36], 00:25:57.371 | 30.00th=[ 71], 40.00th=[ 87], 50.00th=[ 125], 60.00th=[ 155], 00:25:57.371 | 70.00th=[ 167], 80.00th=[ 188], 90.00th=[ 241], 95.00th=[ 284], 00:25:57.371 | 99.00th=[ 372], 99.50th=[ 393], 99.90th=[ 401], 99.95th=[ 405], 00:25:57.371 | 99.99th=[ 405] 00:25:57.371 bw ( KiB/s): min=69632, max=281600, per=9.39%, avg=130391.50, stdev=55124.64, samples=20 00:25:57.371 iops : min= 272, max= 1100, avg=509.30, stdev=215.35, samples=20 00:25:57.371 lat (usec) : 1000=0.12% 00:25:57.371 lat (msec) : 2=0.60%, 4=1.75%, 10=4.62%, 20=6.59%, 50=10.16% 00:25:57.371 lat (msec) : 100=19.86%, 250=47.73%, 500=8.57% 00:25:57.371 cpu : usr=1.50%, sys=1.62%, ctx=3415, majf=0, minf=1 00:25:57.371 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.371 issued rwts: total=0,5156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.371 job9: (groupid=0, jobs=1): err= 0: pid=2409083: Tue Jul 23 18:14:04 2024 00:25:57.371 write: IOPS=457, BW=114MiB/s (120MB/s)(1151MiB/10072msec); 0 zone resets 00:25:57.371 slat (usec): min=19, max=83758, avg=1631.47, stdev=4194.36 00:25:57.371 clat (usec): min=962, max=357154, avg=138104.73, stdev=63906.77 00:25:57.371 lat (usec): min=1000, max=357216, avg=139736.19, stdev=64682.79 00:25:57.371 clat percentiles (msec): 00:25:57.371 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 59], 20.00th=[ 81], 00:25:57.371 | 30.00th=[ 105], 40.00th=[ 125], 50.00th=[ 140], 60.00th=[ 153], 00:25:57.371 | 70.00th=[ 174], 80.00th=[ 188], 90.00th=[ 207], 95.00th=[ 247], 00:25:57.371 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 347], 99.95th=[ 351], 00:25:57.371 | 99.99th=[ 359] 00:25:57.371 bw ( KiB/s): min=47104, max=187392, per=8.37%, avg=116224.00, stdev=38508.53, samples=20 00:25:57.371 iops : min= 184, max= 732, avg=454.00, stdev=150.42, samples=20 00:25:57.371 lat (usec) : 1000=0.02% 00:25:57.371 lat (msec) : 2=0.17%, 4=0.74%, 10=0.72%, 20=2.61%, 50=3.95% 00:25:57.371 lat (msec) : 100=19.60%, 250=67.52%, 500=4.67% 00:25:57.371 cpu : usr=1.24%, sys=1.71%, ctx=2288, majf=0, minf=1 00:25:57.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:57.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.371 issued rwts: total=0,4603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.371 job10: (groupid=0, jobs=1): 
err= 0: pid=2409084: Tue Jul 23 18:14:04 2024 00:25:57.371 write: IOPS=661, BW=165MiB/s (173MB/s)(1666MiB/10071msec); 0 zone resets 00:25:57.371 slat (usec): min=18, max=150984, avg=1051.66, stdev=3680.55 00:25:57.371 clat (usec): min=900, max=451312, avg=95603.73, stdev=75183.02 00:25:57.371 lat (usec): min=948, max=455381, avg=96655.39, stdev=76048.10 00:25:57.371 clat percentiles (msec): 00:25:57.371 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 15], 20.00th=[ 35], 00:25:57.371 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 65], 60.00th=[ 91], 00:25:57.371 | 70.00th=[ 148], 80.00th=[ 171], 90.00th=[ 197], 95.00th=[ 220], 00:25:57.371 | 99.00th=[ 309], 99.50th=[ 351], 99.90th=[ 426], 99.95th=[ 439], 00:25:57.371 | 99.99th=[ 451] 00:25:57.371 bw ( KiB/s): min=86016, max=340992, per=12.16%, avg=168969.70, stdev=84886.58, samples=20 00:25:57.371 iops : min= 336, max= 1332, avg=660.00, stdev=331.62, samples=20 00:25:57.371 lat (usec) : 1000=0.06% 00:25:57.371 lat (msec) : 2=0.57%, 4=1.50%, 10=4.85%, 20=7.38%, 50=21.78% 00:25:57.371 lat (msec) : 100=26.16%, 250=34.47%, 500=3.23% 00:25:57.371 cpu : usr=1.95%, sys=2.29%, ctx=3913, majf=0, minf=1 00:25:57.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:57.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.371 issued rwts: total=0,6663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.371 00:25:57.371 Run status group 0 (all jobs): 00:25:57.371 WRITE: bw=1357MiB/s (1423MB/s), 81.8MiB/s-165MiB/s (85.7MB/s-173MB/s), io=13.5GiB (14.5GB), run=10071-10175msec 00:25:57.371 00:25:57.371 Disk stats (read/write): 00:25:57.371 nvme0n1: ios=46/10788, merge=0/0, ticks=2410/1216010, in_queue=1218420, util=99.88% 00:25:57.371 nvme10n1: ios=39/10494, merge=0/0, ticks=39/1228135, in_queue=1228174, util=97.49% 00:25:57.371 nvme1n1: 
ios=33/10412, merge=0/0, ticks=813/1215759, in_queue=1216572, util=100.00% 00:25:57.372 nvme2n1: ios=47/11720, merge=0/0, ticks=122/1221974, in_queue=1222096, util=98.64% 00:25:57.372 nvme3n1: ios=46/8872, merge=0/0, ticks=2776/1198621, in_queue=1201397, util=100.00% 00:25:57.372 nvme4n1: ios=0/8625, merge=0/0, ticks=0/1222626, in_queue=1222626, util=98.13% 00:25:57.372 nvme5n1: ios=41/6401, merge=0/0, ticks=554/1208809, in_queue=1209363, util=100.00% 00:25:57.372 nvme6n1: ios=43/8724, merge=0/0, ticks=1176/1214426, in_queue=1215602, util=100.00% 00:25:57.372 nvme7n1: ios=0/10145, merge=0/0, ticks=0/1223551, in_queue=1223551, util=98.80% 00:25:57.372 nvme8n1: ios=34/8969, merge=0/0, ticks=1291/1216527, in_queue=1217818, util=100.00% 00:25:57.372 nvme9n1: ios=45/13091, merge=0/0, ticks=913/1222370, in_queue=1223283, util=100.00% 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:57.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:57.372 18:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:57.372 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # 
grep -q -w SPDK2 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.372 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:57.630 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.630 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:57.888 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.888 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:58.145 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:58.145 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.145 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.146 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:58.146 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.146 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:58.403 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.403 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:58.661 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:58.661 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.661 18:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.661 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:58.919 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDK10 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:58.919 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.919 18:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.919 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.919 rmmod nvme_tcp 00:25:58.919 rmmod nvme_fabrics 00:25:58.919 rmmod nvme_keyring 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2403765 ']' 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2403765 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 2403765 ']' 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 2403765 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2403765 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2403765' 00:25:59.177 killing process with pid 2403765 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 2403765 00:25:59.177 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 2403765 00:25:59.744 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:59.744 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:59.744 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:59.744 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.744 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.744 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.744 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.744 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:01.646 00:26:01.646 real 1m0.220s 00:26:01.646 user 3m21.528s 00:26:01.646 sys 0m24.787s 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:01.646 ************************************ 00:26:01.646 END TEST nvmf_multiconnection 00:26:01.646 ************************************ 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.646 ************************************ 00:26:01.646 START TEST nvmf_initiator_timeout 00:26:01.646 ************************************ 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:01.646 * Looking for test storage... 00:26:01.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.646 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.646 18:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.647 18:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:01.647 
18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:01.647 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.178 
18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.178 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:26:04.179 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:04.179 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.179 18:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:04.179 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.179 18:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:04.179 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:04.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:26:04.179 00:26:04.179 --- 10.0.0.2 ping statistics --- 00:26:04.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.179 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:26:04.179 00:26:04.179 --- 10.0.0.1 ping statistics --- 00:26:04.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.179 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:04.179 
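The setup above builds a point-to-point TCP test bed: `cvl_0_0` is moved into the `cvl_0_0_ns_spdk` namespace as the target side (10.0.0.2) while `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), and each direction is verified with a single ping. A minimal sketch of pulling the average rtt out of a ping summary line like the ones in the log — `parse_avg_rtt` is a hypothetical helper, not part of the SPDK scripts:

```shell
#!/bin/sh
# Extract the avg value from ping's "rtt min/avg/max/mdev = ..." summary.
# Splitting on '=' and '/' puts the four rtt numbers in fields 5-8,
# so avg is field 6.
parse_avg_rtt() {
    printf '%s\n' "$1" | awk -F'[=/]' '/rtt min/ {print $6}'
}

# Sample line copied verbatim from the log output above.
line='rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms'
parse_avg_rtt "$line"   # prints 0.144
```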
18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2412479 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2412479 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 2412479 ']' 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:04.179 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.179 [2024-07-23 18:14:11.639815] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:26:04.179 [2024-07-23 18:14:11.639902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.179 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.179 [2024-07-23 18:14:11.705705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.179 [2024-07-23 18:14:11.791430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.179 [2024-07-23 18:14:11.791483] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.179 [2024-07-23 18:14:11.791518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.179 [2024-07-23 18:14:11.791530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.179 [2024-07-23 18:14:11.791540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:04.179 [2024-07-23 18:14:11.791605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.179 [2024-07-23 18:14:11.791664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.179 [2024-07-23 18:14:11.791694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.179 [2024-07-23 18:14:11.791696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.436 Malloc0 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.436 18:14:11 
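The four "Reactor started" notices correspond to the `-m 0xF` core mask passed to `nvmf_tgt`: each set bit selects one CPU core for a reactor thread, so `0xF` maps to cores 0-3. A small sketch of decoding such a mask (the `mask_to_cores` helper is illustrative, not an SPDK utility):

```shell
#!/bin/sh
# Decode an SPDK-style core mask: each set bit selects one core.
mask_to_cores() {
    mask=$(($1))          # arithmetic expansion accepts hex (0xF) or decimal
    core=0
    cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $((mask & 1)) -eq 1 ]; then
            cores="$cores $core"
        fi
        mask=$((mask >> 1))
        core=$((core + 1))
    done
    printf '%s\n' "${cores# }"
}

mask_to_cores 0xF   # prints "0 1 2 3" - matching the four reactors in the log
```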
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.436 Delay0 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.436 [2024-07-23 18:14:11.964986] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.436 [2024-07-23 18:14:11.993252] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.436 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:05.367 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:05.367 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:05.367 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.367 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:05.368 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:07.264 18:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:07.264 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:07.264 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:07.264 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:07.264 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.264 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:07.264 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2412852 00:26:07.264 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:07.264 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:07.264 [global] 00:26:07.264 thread=1 00:26:07.264 invalidate=1 00:26:07.264 rw=write 00:26:07.264 time_based=1 00:26:07.264 runtime=60 00:26:07.264 ioengine=libaio 00:26:07.264 direct=1 00:26:07.264 bs=4096 00:26:07.264 iodepth=1 00:26:07.264 norandommap=0 00:26:07.264 numjobs=1 00:26:07.264 00:26:07.264 verify_dump=1 00:26:07.264 verify_backlog=512 00:26:07.264 verify_state_save=0 00:26:07.264 do_verify=1 00:26:07.264 verify=crc32c-intel 00:26:07.264 [job0] 00:26:07.264 filename=/dev/nvme0n1 00:26:07.264 Could not set queue depth (nvme0n1) 00:26:07.264 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:07.264 fio-3.35 00:26:07.264 Starting 1 thread 00:26:10.540 18:14:17 
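The fio-wrapper invocation (`-p nvmf -i 4096 -d 1 -t write -r 60 -v`) expands into the job file dumped above: 4 KiB sequential writes at queue depth 1 for 60 seconds against `/dev/nvme0n1`, with CRC32C data verification. A sketch that recreates an equivalent job file and reads one setting back, assuming the standard fio INI job-file format:

```shell
#!/bin/sh
# Recreate a job file equivalent to the one shown in the log, then
# read the block size back out of it. This mirrors the logged config;
# it is not the fio-wrapper script itself.
job=$(mktemp)
cat > "$job" <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

bs=$(grep '^bs=' "$job" | cut -d= -f2)
printf 'block size: %s\n' "$bs"   # prints "block size: 4096"
rm -f "$job"
```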
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:10.540 true 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:10.540 true 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:10.540 true 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.540 18:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:10.540 true 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.540 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.814 true 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.814 true 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.814 true 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.814 true 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:13.814 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2412852 00:27:10.072 00:27:10.072 job0: (groupid=0, jobs=1): err= 0: pid=2412921: Tue Jul 23 18:15:15 2024 00:27:10.072 read: IOPS=41, BW=167KiB/s (171kB/s)(9.79MiB/60026msec) 00:27:10.072 slat (usec): min=5, max=15638, avg=20.27, stdev=328.54 00:27:10.073 clat (usec): min=231, max=40838k, avg=23672.71, stdev=815614.59 00:27:10.073 lat (usec): min=239, max=40838k, avg=23692.98, stdev=815614.84 00:27:10.073 clat percentiles (usec): 00:27:10.073 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 251], 00:27:10.073 | 20.00th=[ 258], 30.00th=[ 265], 40.00th=[ 269], 00:27:10.073 | 50.00th=[ 281], 60.00th=[ 293], 70.00th=[ 314], 00:27:10.073 | 80.00th=[ 523], 90.00th=[ 41157], 95.00th=[ 41157], 00:27:10.073 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 44827], 00:27:10.073 | 99.95th=[ 44827], 99.99th=[17112761] 00:27:10.073 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60026msec); 0 zone resets 00:27:10.073 slat (usec): min=7, max=25861, avg=22.31, stdev=510.93 00:27:10.073 clat (usec): min=175, max=951, avg=214.18, stdev=45.51 00:27:10.073 lat (usec): min=185, max=26108, avg=236.50, stdev=513.74 00:27:10.073 clat percentiles (usec): 00:27:10.073 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 
194], 00:27:10.073 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:27:10.073 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 260], 00:27:10.073 | 99.00th=[ 306], 99.50th=[ 457], 99.90th=[ 840], 99.95th=[ 930], 00:27:10.073 | 99.99th=[ 955] 00:27:10.073 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6826.67, stdev=2364.83, samples=3 00:27:10.073 iops : min= 1024, max= 2048, avg=1706.67, stdev=591.21, samples=3 00:27:10.073 lat (usec) : 250=50.84%, 500=38.76%, 750=1.66%, 1000=0.12% 00:27:10.073 lat (msec) : 50=8.60%, >=2000=0.02% 00:27:10.073 cpu : usr=0.09%, sys=0.13%, ctx=5072, majf=0, minf=2 00:27:10.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:10.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.073 issued rwts: total=2507,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:10.073 00:27:10.073 Run status group 0 (all jobs): 00:27:10.073 READ: bw=167KiB/s (171kB/s), 167KiB/s-167KiB/s (171kB/s-171kB/s), io=9.79MiB (10.3MB), run=60026-60026msec 00:27:10.073 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60026-60026msec 00:27:10.073 00:27:10.073 Disk stats (read/write): 00:27:10.073 nvme0n1: ios=2555/2560, merge=0/0, ticks=19632/539, in_queue=20171, util=99.77% 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:10.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 
00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:10.073 nvmf hotplug test: fio successful as expected 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:10.073 18:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:10.073 rmmod nvme_tcp 00:27:10.073 rmmod nvme_fabrics 00:27:10.073 rmmod nvme_keyring 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2412479 ']' 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2412479 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 2412479 ']' 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 2412479 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2412479 00:27:10.073 18:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2412479' 00:27:10.073 killing process with pid 2412479 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 2412479 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 2412479 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.073 18:15:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.073 00:27:10.073 real 1m8.381s 00:27:10.073 user 4m10.695s 00:27:10.073 sys 0m6.814s 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.073 ************************************ 00:27:10.073 END TEST nvmf_initiator_timeout 00:27:10.073 ************************************ 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.073 18:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.608 18:15:19 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:12.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:12.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.608 18:15:19 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:12.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:12.608 Found net devices under 
0000:0a:00.1: cvl_0_1 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:12.608 ************************************ 00:27:12.608 START TEST nvmf_perf_adq 00:27:12.608 ************************************ 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:12.608 * Looking for test storage... 
00:27:12.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:12.608 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.609 18:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.609 18:15:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:14.511 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:14.511 18:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:14.511 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
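The device-discovery loop traced above (nvmf/common.sh@340-401) maps each supported PCI function to its kernel net device by globbing sysfs. A condensed sketch of that walk, using a mock directory tree instead of real hardware so it runs anywhere — the paths and `cvl_0_*` names below mirror the trace but are illustrative, not tied to the CI host:

```shell
# Mock the sysfs layout /sys/bus/pci/devices/<pci>/net/<ifname> in a tempdir,
# then recover interface names the same way the trace does.
root=$(mktemp -d)
mkdir -p "$root/0000:0a:00.0/net/cvl_0_0" "$root/0000:0a:00.1/net/cvl_0_1"
net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$root/$pci/net/"*)           # glob interfaces under this function
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, keep interface names
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[@]}"    # cvl_0_0 cvl_0_1
rm -rf "$root"
```

The `##*/` expansion is the same trick the trace applies at nvmf/common.sh@399 to turn full sysfs paths into bare interface names before appending them to `net_devs`.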
00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:14.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:14.511 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:14.511 18:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:15.078 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:16.977 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:22.254 
18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.254 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.255 18:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.255 18:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:22.255 
18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:22.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:22.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:27:22.255 00:27:22.255 --- 10.0.0.2 ping statistics --- 00:27:22.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.255 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:27:22.255 00:27:22.255 --- 10.0.0.1 ping statistics --- 00:27:22.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.255 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2425066 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2425066 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2425066 ']' 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.255 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.255 [2024-07-23 18:15:29.600728] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:27:22.256 [2024-07-23 18:15:29.600813] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.256 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.256 [2024-07-23 18:15:29.675179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:22.256 [2024-07-23 18:15:29.764890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.256 [2024-07-23 18:15:29.764953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.256 [2024-07-23 18:15:29.764981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.256 [2024-07-23 18:15:29.764992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.256 [2024-07-23 18:15:29.765002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
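The namespace plumbing traced earlier (nvmf/common.sh@242-268) isolates the two ports of one NIC on the same host: one port becomes the NVMe/TCP target inside a network namespace, its sibling stays in the root namespace as the initiator, and a one-packet ping in each direction verifies the link before the target starts. A generic sketch of that topology follows; the namespace and interface names are placeholders, and the commands require root plus real interfaces, so this is a configuration sketch for orientation only:

```shell
# Sketch of the nvmf_tcp_init topology, with placeholder names.
NS=spdk_tgt_ns; TGT_IF=port0; INI_IF=port1
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                     # target port into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP well-known port
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
```

This is why the trace launches `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`: the target process must live in the namespace that owns the target-side port.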
00:27:22.256 [2024-07-23 18:15:29.765092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.256 [2024-07-23 18:15:29.765161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.256 [2024-07-23 18:15:29.765190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.256 [2024-07-23 18:15:29.765192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:22.256 18:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.256 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.514 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.514 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:22.514 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.514 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.514 [2024-07-23 18:15:29.996339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.514 Malloc1 00:27:22.514 18:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.514 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.515 [2024-07-23 18:15:30.050181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.515 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.515 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2425192 00:27:22.515 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:22.515 18:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:22.515 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.413 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:24.413 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.413 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.671 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.671 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:24.671 "tick_rate": 2700000000, 00:27:24.671 "poll_groups": [ 00:27:24.671 { 00:27:24.671 "name": "nvmf_tgt_poll_group_000", 00:27:24.671 "admin_qpairs": 1, 00:27:24.671 "io_qpairs": 1, 00:27:24.671 "current_admin_qpairs": 1, 00:27:24.671 "current_io_qpairs": 1, 00:27:24.671 "pending_bdev_io": 0, 00:27:24.671 "completed_nvme_io": 20002, 00:27:24.671 "transports": [ 00:27:24.671 { 00:27:24.671 "trtype": "TCP" 00:27:24.671 } 00:27:24.671 ] 00:27:24.671 }, 00:27:24.671 { 00:27:24.671 "name": "nvmf_tgt_poll_group_001", 00:27:24.671 "admin_qpairs": 0, 00:27:24.671 "io_qpairs": 1, 00:27:24.671 "current_admin_qpairs": 0, 00:27:24.671 "current_io_qpairs": 1, 00:27:24.671 "pending_bdev_io": 0, 00:27:24.671 "completed_nvme_io": 20754, 00:27:24.671 "transports": [ 00:27:24.671 { 00:27:24.671 "trtype": "TCP" 00:27:24.671 } 00:27:24.671 ] 00:27:24.671 }, 00:27:24.671 { 00:27:24.671 "name": "nvmf_tgt_poll_group_002", 00:27:24.671 "admin_qpairs": 0, 00:27:24.671 "io_qpairs": 1, 00:27:24.671 "current_admin_qpairs": 0, 00:27:24.671 "current_io_qpairs": 1, 00:27:24.671 "pending_bdev_io": 0, 
00:27:24.671 "completed_nvme_io": 21261, 00:27:24.671 "transports": [ 00:27:24.671 { 00:27:24.671 "trtype": "TCP" 00:27:24.671 } 00:27:24.671 ] 00:27:24.671 }, 00:27:24.671 { 00:27:24.671 "name": "nvmf_tgt_poll_group_003", 00:27:24.671 "admin_qpairs": 0, 00:27:24.671 "io_qpairs": 1, 00:27:24.671 "current_admin_qpairs": 0, 00:27:24.671 "current_io_qpairs": 1, 00:27:24.671 "pending_bdev_io": 0, 00:27:24.671 "completed_nvme_io": 20616, 00:27:24.671 "transports": [ 00:27:24.671 { 00:27:24.671 "trtype": "TCP" 00:27:24.671 } 00:27:24.671 ] 00:27:24.671 } 00:27:24.671 ] 00:27:24.671 }' 00:27:24.671 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:24.671 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:24.671 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:24.671 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:24.671 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2425192 00:27:32.773 Initializing NVMe Controllers 00:27:32.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:32.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:32.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:32.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:32.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:32.773 Initialization complete. Launching workers. 
00:27:32.773 ======================================================== 00:27:32.773 Latency(us) 00:27:32.773 Device Information : IOPS MiB/s Average min max 00:27:32.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10786.30 42.13 5934.65 2051.46 8890.79 00:27:32.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10830.80 42.31 5909.41 2967.14 8574.28 00:27:32.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11152.40 43.56 5738.22 2578.42 8631.13 00:27:32.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10465.80 40.88 6117.20 2584.52 9471.61 00:27:32.773 ======================================================== 00:27:32.773 Total : 43235.30 168.89 5921.85 2051.46 9471.61 00:27:32.773 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.773 rmmod nvme_tcp 00:27:32.773 rmmod nvme_fabrics 00:27:32.773 rmmod nvme_keyring 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:32.773 18:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2425066 ']' 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2425066 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2425066 ']' 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2425066 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2425066 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2425066' 00:27:32.773 killing process with pid 2425066 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2425066 00:27:32.773 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2425066 00:27:33.031 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.031 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:33.031 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:33.031 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.031 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.031 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.031 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.031 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.935 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.935 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:34.935 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:35.871 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:37.770 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.078 18:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:43.078 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:43.078 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:43.078 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:43.078 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:43.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:43.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:43.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:27:43.079 00:27:43.079 --- 10.0.0.2 ping statistics --- 00:27:43.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.079 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:43.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:27:43.079 00:27:43.079 --- 10.0.0.1 ping statistics --- 00:27:43.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.079 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:43.079 net.core.busy_poll = 1 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:43.079 net.core.busy_read = 1 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2427798 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2427798 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2427798 ']' 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.079 [2024-07-23 18:15:50.493840] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:27:43.079 [2024-07-23 18:15:50.493927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.079 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.079 [2024-07-23 18:15:50.558369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.079 [2024-07-23 18:15:50.642644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.079 [2024-07-23 18:15:50.642701] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.079 [2024-07-23 18:15:50.642722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.079 [2024-07-23 18:15:50.642733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.079 [2024-07-23 18:15:50.642742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:43.079 [2024-07-23 18:15:50.642800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.079 [2024-07-23 18:15:50.642859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.079 [2024-07-23 18:15:50.642924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.079 [2024-07-23 18:15:50.642926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:43.079 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:43.338 18:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 [2024-07-23 18:15:50.896979] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 Malloc1 00:27:43.338 18:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.338 [2024-07-23 18:15:50.950258] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2427837 00:27:43.338 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:43.338 18:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:43.338 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.866 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:45.866 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.866 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.866 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.866 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:45.866 "tick_rate": 2700000000, 00:27:45.866 "poll_groups": [ 00:27:45.866 { 00:27:45.866 "name": "nvmf_tgt_poll_group_000", 00:27:45.866 "admin_qpairs": 1, 00:27:45.866 "io_qpairs": 2, 00:27:45.866 "current_admin_qpairs": 1, 00:27:45.866 "current_io_qpairs": 2, 00:27:45.866 "pending_bdev_io": 0, 00:27:45.866 "completed_nvme_io": 26615, 00:27:45.866 "transports": [ 00:27:45.866 { 00:27:45.866 "trtype": "TCP" 00:27:45.866 } 00:27:45.866 ] 00:27:45.866 }, 00:27:45.866 { 00:27:45.866 "name": "nvmf_tgt_poll_group_001", 00:27:45.866 "admin_qpairs": 0, 00:27:45.866 "io_qpairs": 2, 00:27:45.866 "current_admin_qpairs": 0, 00:27:45.866 "current_io_qpairs": 2, 00:27:45.866 "pending_bdev_io": 0, 00:27:45.866 "completed_nvme_io": 26354, 00:27:45.866 "transports": [ 00:27:45.866 { 00:27:45.866 "trtype": "TCP" 00:27:45.866 } 00:27:45.866 ] 00:27:45.866 }, 00:27:45.866 { 00:27:45.866 "name": "nvmf_tgt_poll_group_002", 00:27:45.866 "admin_qpairs": 0, 00:27:45.866 "io_qpairs": 0, 00:27:45.866 "current_admin_qpairs": 0, 00:27:45.866 "current_io_qpairs": 0, 00:27:45.866 "pending_bdev_io": 0, 
00:27:45.866 "completed_nvme_io": 0, 00:27:45.866 "transports": [ 00:27:45.866 { 00:27:45.866 "trtype": "TCP" 00:27:45.866 } 00:27:45.866 ] 00:27:45.866 }, 00:27:45.866 { 00:27:45.866 "name": "nvmf_tgt_poll_group_003", 00:27:45.866 "admin_qpairs": 0, 00:27:45.866 "io_qpairs": 0, 00:27:45.866 "current_admin_qpairs": 0, 00:27:45.866 "current_io_qpairs": 0, 00:27:45.866 "pending_bdev_io": 0, 00:27:45.866 "completed_nvme_io": 0, 00:27:45.866 "transports": [ 00:27:45.866 { 00:27:45.866 "trtype": "TCP" 00:27:45.866 } 00:27:45.866 ] 00:27:45.866 } 00:27:45.866 ] 00:27:45.866 }' 00:27:45.866 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:45.866 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:45.866 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:45.866 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:45.866 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2427837 00:27:53.970 Initializing NVMe Controllers 00:27:53.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:53.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:53.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:53.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:53.970 Initialization complete. Launching workers. 
00:27:53.970 ======================================================== 00:27:53.970 Latency(us) 00:27:53.970 Device Information : IOPS MiB/s Average min max 00:27:53.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6917.90 27.02 9254.76 1688.82 54151.09 00:27:53.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6498.90 25.39 9849.46 1683.03 54255.12 00:27:53.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6932.20 27.08 9234.91 1893.07 56134.01 00:27:53.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7475.50 29.20 8564.50 1820.94 55331.59 00:27:53.970 ======================================================== 00:27:53.970 Total : 27824.49 108.69 9203.27 1683.03 56134.01 00:27:53.970 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.970 rmmod nvme_tcp 00:27:53.970 rmmod nvme_fabrics 00:27:53.970 rmmod nvme_keyring 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:53.970 18:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2427798 ']' 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2427798 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2427798 ']' 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2427798 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427798 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427798' 00:27:53.970 killing process with pid 2427798 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2427798 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2427798 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.970 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:57.263 00:27:57.263 real 0m44.739s 00:27:57.263 user 2m38.791s 00:27:57.263 sys 0m9.606s 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.263 ************************************ 00:27:57.263 END TEST nvmf_perf_adq 00:27:57.263 ************************************ 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:57.263 ************************************ 00:27:57.263 START TEST nvmf_shutdown 00:27:57.263 ************************************ 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:57.263 * Looking for test storage... 00:27:57.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.263 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:57.263 18:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:57.264 ************************************ 00:27:57.264 START TEST nvmf_shutdown_tc1 00:27:57.264 ************************************ 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.264 18:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.264 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.166 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.167 18:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:59.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:59.167 18:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.167 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:59.426 00:27:59.426 --- 10.0.0.2 ping statistics --- 00:27:59.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.426 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:27:59.426 00:27:59.426 --- 10.0.0.1 ping statistics --- 00:27:59.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.426 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.426 
18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2431127 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2431127 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2431127 ']' 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:59.426 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.426 [2024-07-23 18:16:06.981275] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
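[annotation] The `waitforlisten 2431127` step above (with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`) blocks until the freshly launched `nvmf_tgt` accepts connections on its RPC UNIX domain socket. A minimal Python sketch of that polling idea — the real harness does this in shell inside `autotest_common.sh`, and the function name here is illustrative, not SPDK code:

```python
import socket
import time

def waitforlisten(rpc_addr: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Retry connecting to a UNIX domain socket until the server listens.

    Mirrors the harness behaviour: up to max_retries connection attempts
    against the RPC socket path (e.g. /var/tmp/spdk.sock), returning True
    as soon as a connect() succeeds and False if it never does.
    """
    for _ in range(max_retries):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(rpc_addr)
                return True  # target is up and listening
            except OSError:
                time.sleep(delay)  # not ready yet; back off and retry
    return False
```

A connect() succeeds as soon as the target has called listen(), so this detects readiness without issuing any RPC.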
00:27:59.426 [2024-07-23 18:16:06.981381] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.426 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.426 [2024-07-23 18:16:07.046513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.685 [2024-07-23 18:16:07.136289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.685 [2024-07-23 18:16:07.136361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.685 [2024-07-23 18:16:07.136400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.685 [2024-07-23 18:16:07.136411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.685 [2024-07-23 18:16:07.136421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
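[annotation] The target above was started with `-m 0x1E` and the EAL banner reports "Total cores available: 4". The mapping from a hex core mask to core IDs is just the set bits of the mask — a generic sketch (helper name is illustrative, not an SPDK API):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the core IDs selected by an SPDK/DPDK-style hex core mask."""
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

# -m 0x1E = 0b11110 selects cores 1 through 4, i.e. four reactors
# and none on core 0 (which the bdev_svc side claims with -m 0x1).
print(cores_from_mask(0x1E))  # → [1, 2, 3, 4]
```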
00:27:59.685 [2024-07-23 18:16:07.136522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.685 [2024-07-23 18:16:07.136597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.685 [2024-07-23 18:16:07.136659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:59.685 [2024-07-23 18:16:07.136661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.685 [2024-07-23 18:16:07.291840] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.685 18:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.685 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.943 Malloc1 00:27:59.944 [2024-07-23 18:16:07.380286] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.944 Malloc2 00:27:59.944 Malloc3 00:27:59.944 Malloc4 00:27:59.944 Malloc5 00:27:59.944 Malloc6 00:28:00.202 Malloc7 00:28:00.202 Malloc8 00:28:00.202 Malloc9 
00:28:00.202 Malloc10 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2431303 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2431303 /var/tmp/bdevperf.sock 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2431303 ']' 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:00.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.202 { 00:28:00.202 "params": { 00:28:00.202 "name": "Nvme$subsystem", 00:28:00.202 "trtype": "$TEST_TRANSPORT", 00:28:00.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.202 "adrfam": "ipv4", 00:28:00.202 "trsvcid": "$NVMF_PORT", 00:28:00.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.202 "hdgst": ${hdgst:-false}, 00:28:00.202 "ddgst": ${ddgst:-false} 00:28:00.202 }, 00:28:00.202 "method": "bdev_nvme_attach_controller" 00:28:00.202 } 00:28:00.202 EOF 00:28:00.202 )") 00:28:00.202 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.461 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.461 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.461 { 00:28:00.461 "params": { 00:28:00.461 "name": "Nvme$subsystem", 00:28:00.461 "trtype": "$TEST_TRANSPORT", 00:28:00.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.461 "adrfam": "ipv4", 00:28:00.461 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": ${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.462 { 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme$subsystem", 00:28:00.462 "trtype": "$TEST_TRANSPORT", 00:28:00.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": ${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.462 { 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme$subsystem", 00:28:00.462 "trtype": "$TEST_TRANSPORT", 00:28:00.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": 
${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.462 { 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme$subsystem", 00:28:00.462 "trtype": "$TEST_TRANSPORT", 00:28:00.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": ${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.462 { 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme$subsystem", 00:28:00.462 "trtype": "$TEST_TRANSPORT", 00:28:00.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": ${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 
00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.462 { 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme$subsystem", 00:28:00.462 "trtype": "$TEST_TRANSPORT", 00:28:00.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": ${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.462 { 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme$subsystem", 00:28:00.462 "trtype": "$TEST_TRANSPORT", 00:28:00.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": ${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.462 { 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme$subsystem", 00:28:00.462 "trtype": "$TEST_TRANSPORT", 00:28:00.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": ${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.462 { 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme$subsystem", 00:28:00.462 "trtype": "$TEST_TRANSPORT", 00:28:00.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "$NVMF_PORT", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.462 "hdgst": ${hdgst:-false}, 00:28:00.462 "ddgst": ${ddgst:-false} 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 } 00:28:00.462 EOF 00:28:00.462 )") 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:00.462 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme1", 00:28:00.462 "trtype": "tcp", 00:28:00.462 "traddr": "10.0.0.2", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "4420", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:00.462 "hdgst": false, 00:28:00.462 "ddgst": false 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 },{ 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme2", 00:28:00.462 "trtype": "tcp", 00:28:00.462 "traddr": "10.0.0.2", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "4420", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:00.462 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:00.462 "hdgst": false, 00:28:00.462 "ddgst": false 00:28:00.462 }, 00:28:00.462 "method": "bdev_nvme_attach_controller" 00:28:00.462 },{ 00:28:00.462 "params": { 00:28:00.462 "name": "Nvme3", 00:28:00.462 "trtype": "tcp", 00:28:00.462 "traddr": "10.0.0.2", 00:28:00.462 "adrfam": "ipv4", 00:28:00.462 "trsvcid": "4420", 00:28:00.462 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:00.463 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:00.463 "hdgst": false, 00:28:00.463 "ddgst": false 00:28:00.463 }, 00:28:00.463 "method": "bdev_nvme_attach_controller" 00:28:00.463 },{ 00:28:00.463 "params": { 00:28:00.463 "name": "Nvme4", 00:28:00.463 "trtype": "tcp", 00:28:00.463 "traddr": "10.0.0.2", 00:28:00.463 "adrfam": "ipv4", 00:28:00.463 "trsvcid": "4420", 00:28:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:00.463 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:00.463 "hdgst": false, 00:28:00.463 "ddgst": false 00:28:00.463 }, 00:28:00.463 "method": "bdev_nvme_attach_controller" 00:28:00.463 },{ 
00:28:00.463 "params": { 00:28:00.463 "name": "Nvme5", 00:28:00.463 "trtype": "tcp", 00:28:00.463 "traddr": "10.0.0.2", 00:28:00.463 "adrfam": "ipv4", 00:28:00.463 "trsvcid": "4420", 00:28:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:00.463 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:00.463 "hdgst": false, 00:28:00.463 "ddgst": false 00:28:00.463 }, 00:28:00.463 "method": "bdev_nvme_attach_controller" 00:28:00.463 },{ 00:28:00.463 "params": { 00:28:00.463 "name": "Nvme6", 00:28:00.463 "trtype": "tcp", 00:28:00.463 "traddr": "10.0.0.2", 00:28:00.463 "adrfam": "ipv4", 00:28:00.463 "trsvcid": "4420", 00:28:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:00.463 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:00.463 "hdgst": false, 00:28:00.463 "ddgst": false 00:28:00.463 }, 00:28:00.463 "method": "bdev_nvme_attach_controller" 00:28:00.463 },{ 00:28:00.463 "params": { 00:28:00.463 "name": "Nvme7", 00:28:00.463 "trtype": "tcp", 00:28:00.463 "traddr": "10.0.0.2", 00:28:00.463 "adrfam": "ipv4", 00:28:00.463 "trsvcid": "4420", 00:28:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:00.463 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:00.463 "hdgst": false, 00:28:00.463 "ddgst": false 00:28:00.463 }, 00:28:00.463 "method": "bdev_nvme_attach_controller" 00:28:00.463 },{ 00:28:00.463 "params": { 00:28:00.463 "name": "Nvme8", 00:28:00.463 "trtype": "tcp", 00:28:00.463 "traddr": "10.0.0.2", 00:28:00.463 "adrfam": "ipv4", 00:28:00.463 "trsvcid": "4420", 00:28:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:00.463 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:00.463 "hdgst": false, 00:28:00.463 "ddgst": false 00:28:00.463 }, 00:28:00.463 "method": "bdev_nvme_attach_controller" 00:28:00.463 },{ 00:28:00.463 "params": { 00:28:00.463 "name": "Nvme9", 00:28:00.463 "trtype": "tcp", 00:28:00.463 "traddr": "10.0.0.2", 00:28:00.463 "adrfam": "ipv4", 00:28:00.463 "trsvcid": "4420", 00:28:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:00.463 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:00.463 "hdgst": false, 00:28:00.463 "ddgst": false 00:28:00.463 }, 00:28:00.463 "method": "bdev_nvme_attach_controller" 00:28:00.463 },{ 00:28:00.463 "params": { 00:28:00.463 "name": "Nvme10", 00:28:00.463 "trtype": "tcp", 00:28:00.463 "traddr": "10.0.0.2", 00:28:00.463 "adrfam": "ipv4", 00:28:00.463 "trsvcid": "4420", 00:28:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:00.463 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:00.463 "hdgst": false, 00:28:00.463 "ddgst": false 00:28:00.463 }, 00:28:00.463 "method": "bdev_nvme_attach_controller" 00:28:00.463 }' 00:28:00.463 [2024-07-23 18:16:07.903202] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:00.463 [2024-07-23 18:16:07.903292] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:00.463 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.463 [2024-07-23 18:16:07.969407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.463 [2024-07-23 18:16:08.056163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.363 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:02.363 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:02.363 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:02.363 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.363 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.363 18:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.363 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2431303 00:28:02.363 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:02.363 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:03.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2431303 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2431127 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.297 { 00:28:03.297 "params": { 00:28:03.297 "name": "Nvme$subsystem", 00:28:03.297 "trtype": "$TEST_TRANSPORT", 00:28:03.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.297 "adrfam": "ipv4", 00:28:03.297 "trsvcid": 
"$NVMF_PORT", 00:28:03.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.297 "hdgst": ${hdgst:-false}, 00:28:03.297 "ddgst": ${ddgst:-false} 00:28:03.297 }, 00:28:03.297 "method": "bdev_nvme_attach_controller" 00:28:03.297 } 00:28:03.297 EOF 00:28:03.297 )") 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.297 { 00:28:03.297 "params": { 00:28:03.297 "name": "Nvme$subsystem", 00:28:03.297 "trtype": "$TEST_TRANSPORT", 00:28:03.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.297 "adrfam": "ipv4", 00:28:03.297 "trsvcid": "$NVMF_PORT", 00:28:03.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.297 "hdgst": ${hdgst:-false}, 00:28:03.297 "ddgst": ${ddgst:-false} 00:28:03.297 }, 00:28:03.297 "method": "bdev_nvme_attach_controller" 00:28:03.297 } 00:28:03.297 EOF 00:28:03.297 )") 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.297 { 00:28:03.297 "params": { 00:28:03.297 "name": "Nvme$subsystem", 00:28:03.297 "trtype": "$TEST_TRANSPORT", 00:28:03.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.297 "adrfam": "ipv4", 00:28:03.297 "trsvcid": "$NVMF_PORT", 00:28:03.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.297 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:03.297 "hdgst": ${hdgst:-false}, 00:28:03.297 "ddgst": ${ddgst:-false} 00:28:03.297 }, 00:28:03.297 "method": "bdev_nvme_attach_controller" 00:28:03.297 } 00:28:03.297 EOF 00:28:03.297 )") 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.297 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.297 { 00:28:03.297 "params": { 00:28:03.297 "name": "Nvme$subsystem", 00:28:03.297 "trtype": "$TEST_TRANSPORT", 00:28:03.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.297 "adrfam": "ipv4", 00:28:03.297 "trsvcid": "$NVMF_PORT", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.298 "hdgst": ${hdgst:-false}, 00:28:03.298 "ddgst": ${ddgst:-false} 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 } 00:28:03.298 EOF 00:28:03.298 )") 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.298 { 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme$subsystem", 00:28:03.298 "trtype": "$TEST_TRANSPORT", 00:28:03.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "$NVMF_PORT", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.298 "hdgst": ${hdgst:-false}, 00:28:03.298 "ddgst": ${ddgst:-false} 00:28:03.298 
}, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 } 00:28:03.298 EOF 00:28:03.298 )") 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.298 { 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme$subsystem", 00:28:03.298 "trtype": "$TEST_TRANSPORT", 00:28:03.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "$NVMF_PORT", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.298 "hdgst": ${hdgst:-false}, 00:28:03.298 "ddgst": ${ddgst:-false} 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 } 00:28:03.298 EOF 00:28:03.298 )") 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.298 { 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme$subsystem", 00:28:03.298 "trtype": "$TEST_TRANSPORT", 00:28:03.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "$NVMF_PORT", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.298 "hdgst": ${hdgst:-false}, 00:28:03.298 "ddgst": ${ddgst:-false} 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 } 00:28:03.298 EOF 00:28:03.298 )") 00:28:03.298 18:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.298 { 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme$subsystem", 00:28:03.298 "trtype": "$TEST_TRANSPORT", 00:28:03.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "$NVMF_PORT", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.298 "hdgst": ${hdgst:-false}, 00:28:03.298 "ddgst": ${ddgst:-false} 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 } 00:28:03.298 EOF 00:28:03.298 )") 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.298 { 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme$subsystem", 00:28:03.298 "trtype": "$TEST_TRANSPORT", 00:28:03.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "$NVMF_PORT", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.298 "hdgst": ${hdgst:-false}, 00:28:03.298 "ddgst": ${ddgst:-false} 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 } 00:28:03.298 EOF 00:28:03.298 )") 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.298 18:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.298 { 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme$subsystem", 00:28:03.298 "trtype": "$TEST_TRANSPORT", 00:28:03.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "$NVMF_PORT", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.298 "hdgst": ${hdgst:-false}, 00:28:03.298 "ddgst": ${ddgst:-false} 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 } 00:28:03.298 EOF 00:28:03.298 )") 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:03.298 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme1", 00:28:03.298 "trtype": "tcp", 00:28:03.298 "traddr": "10.0.0.2", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "4420", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.298 "hdgst": false, 00:28:03.298 "ddgst": false 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 },{ 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme2", 00:28:03.298 "trtype": "tcp", 00:28:03.298 "traddr": "10.0.0.2", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "4420", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:03.298 "hdgst": false, 00:28:03.298 "ddgst": false 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 },{ 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme3", 00:28:03.298 "trtype": "tcp", 00:28:03.298 "traddr": "10.0.0.2", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "4420", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:03.298 "hdgst": false, 00:28:03.298 "ddgst": false 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 },{ 00:28:03.298 "params": { 00:28:03.298 "name": "Nvme4", 00:28:03.298 "trtype": "tcp", 00:28:03.298 "traddr": "10.0.0.2", 00:28:03.298 "adrfam": "ipv4", 00:28:03.298 "trsvcid": "4420", 00:28:03.298 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:03.298 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:03.298 "hdgst": false, 00:28:03.298 "ddgst": false 00:28:03.298 }, 00:28:03.298 "method": "bdev_nvme_attach_controller" 00:28:03.298 },{ 00:28:03.298 "params": { 
00:28:03.299 "name": "Nvme5", 00:28:03.299 "trtype": "tcp", 00:28:03.299 "traddr": "10.0.0.2", 00:28:03.299 "adrfam": "ipv4", 00:28:03.299 "trsvcid": "4420", 00:28:03.299 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:03.299 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:03.299 "hdgst": false, 00:28:03.299 "ddgst": false 00:28:03.299 }, 00:28:03.299 "method": "bdev_nvme_attach_controller" 00:28:03.299 },{ 00:28:03.299 "params": { 00:28:03.299 "name": "Nvme6", 00:28:03.299 "trtype": "tcp", 00:28:03.299 "traddr": "10.0.0.2", 00:28:03.299 "adrfam": "ipv4", 00:28:03.299 "trsvcid": "4420", 00:28:03.299 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:03.299 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:03.299 "hdgst": false, 00:28:03.299 "ddgst": false 00:28:03.299 }, 00:28:03.299 "method": "bdev_nvme_attach_controller" 00:28:03.299 },{ 00:28:03.299 "params": { 00:28:03.299 "name": "Nvme7", 00:28:03.299 "trtype": "tcp", 00:28:03.299 "traddr": "10.0.0.2", 00:28:03.299 "adrfam": "ipv4", 00:28:03.299 "trsvcid": "4420", 00:28:03.299 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:03.299 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:03.299 "hdgst": false, 00:28:03.299 "ddgst": false 00:28:03.299 }, 00:28:03.299 "method": "bdev_nvme_attach_controller" 00:28:03.299 },{ 00:28:03.299 "params": { 00:28:03.299 "name": "Nvme8", 00:28:03.299 "trtype": "tcp", 00:28:03.299 "traddr": "10.0.0.2", 00:28:03.299 "adrfam": "ipv4", 00:28:03.299 "trsvcid": "4420", 00:28:03.299 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:03.299 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:03.299 "hdgst": false, 00:28:03.299 "ddgst": false 00:28:03.299 }, 00:28:03.299 "method": "bdev_nvme_attach_controller" 00:28:03.299 },{ 00:28:03.299 "params": { 00:28:03.299 "name": "Nvme9", 00:28:03.299 "trtype": "tcp", 00:28:03.299 "traddr": "10.0.0.2", 00:28:03.299 "adrfam": "ipv4", 00:28:03.299 "trsvcid": "4420", 00:28:03.299 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:03.299 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:03.299 "hdgst": false, 00:28:03.299 "ddgst": false 00:28:03.299 }, 00:28:03.299 "method": "bdev_nvme_attach_controller" 00:28:03.299 },{ 00:28:03.299 "params": { 00:28:03.299 "name": "Nvme10", 00:28:03.299 "trtype": "tcp", 00:28:03.299 "traddr": "10.0.0.2", 00:28:03.299 "adrfam": "ipv4", 00:28:03.299 "trsvcid": "4420", 00:28:03.299 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:03.299 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:03.299 "hdgst": false, 00:28:03.299 "ddgst": false 00:28:03.299 }, 00:28:03.299 "method": "bdev_nvme_attach_controller" 00:28:03.299 }' 00:28:03.299 [2024-07-23 18:16:10.926947] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:03.299 [2024-07-23 18:16:10.927039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431717 ] 00:28:03.557 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.557 [2024-07-23 18:16:10.993543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.557 [2024-07-23 18:16:11.079759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.929 Running I/O for 1 seconds... 
00:28:06.306 00:28:06.306 Latency(us) 00:28:06.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.306 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme1n1 : 1.09 243.88 15.24 0.00 0.00 256507.45 17573.36 246997.90 00:28:06.306 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme2n1 : 1.10 241.28 15.08 0.00 0.00 255346.85 12379.02 229910.00 00:28:06.306 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme3n1 : 1.09 242.95 15.18 0.00 0.00 248973.02 12184.84 268746.15 00:28:06.306 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme4n1 : 1.09 234.74 14.67 0.00 0.00 255897.60 25631.86 259425.47 00:28:06.306 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme5n1 : 1.11 231.46 14.47 0.00 0.00 255423.91 19223.89 257872.02 00:28:06.306 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme6n1 : 1.14 224.51 14.03 0.00 0.00 259360.81 20486.07 259425.47 00:28:06.306 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme7n1 : 1.18 271.68 16.98 0.00 0.00 211246.19 13495.56 273406.48 00:28:06.306 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme8n1 : 1.15 227.85 14.24 0.00 0.00 244867.63 3373.89 259425.47 00:28:06.306 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme9n1 : 1.17 221.77 13.86 0.00 0.00 245646.60 8835.22 260978.92 00:28:06.306 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.306 Verification LBA range: start 0x0 length 0x400 00:28:06.306 Nvme10n1 : 1.19 269.49 16.84 0.00 0.00 202368.95 6844.87 287387.50 00:28:06.306 =================================================================================================================== 00:28:06.306 Total : 2409.62 150.60 0.00 0.00 241942.97 3373.89 287387.50 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.306 
18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.306 rmmod nvme_tcp 00:28:06.306 rmmod nvme_fabrics 00:28:06.306 rmmod nvme_keyring 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2431127 ']' 00:28:06.306 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2431127 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2431127 ']' 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2431127 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2431127 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2431127' 00:28:06.307 killing process 
with pid 2431127 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2431127 00:28:06.307 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2431127 00:28:06.877 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.877 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:06.877 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:06.877 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.877 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.877 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.877 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.877 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.814 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.814 00:28:08.814 real 0m11.848s 00:28:08.814 user 0m33.699s 00:28:08.814 sys 0m3.277s 00:28:08.814 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.814 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.814 ************************************ 00:28:08.814 END TEST nvmf_shutdown_tc1 00:28:08.814 ************************************ 
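The per-subsystem blocks repeated in the xtrace output above come from a single loop idiom in nvmf/common.sh: each iteration captures a here-document into one array element, and the fragments are later comma-joined (via `IFS=,` and `"${config[*]}"`) into the controller config that bdevperf consumes. A minimal standalone sketch of that idiom, with sample variable values standing in for the test environment (they are assumptions, not taken from a live run):

```shell
#!/bin/bash
# Sketch of the nvmf/common.sh@554 config-assembly pattern seen in the log.
# Sample values; a real run sets these from the test environment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
    # Unquoted EOF lets the shell expand variables inside the heredoc;
    # ${hdgst:-false} / ${ddgst:-false} default to "false" when unset.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# "${config[*]}" joins elements with the first character of IFS, producing
# the comma-separated object list printed at nvmf/common.sh@558 above.
IFS=,
joined="${config[*]}"
printf '%s\n' "$joined"
```

The real script additionally pipes each fragment through `jq .` (nvmf/common.sh@556) to validate and pretty-print it before handing the joined result to bdevperf.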
00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:09.074 ************************************ 00:28:09.074 START TEST nvmf_shutdown_tc2 00:28:09.074 ************************************ 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 
00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.074 18:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.074 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.075 18:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.075 18:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:09.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:28:09.075 00:28:09.075 --- 10.0.0.2 ping statistics --- 00:28:09.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.075 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:28:09.075 00:28:09.075 --- 10.0.0.1 ping statistics --- 00:28:09.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.075 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.075 18:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2432474 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2432474 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2432474 ']' 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.075 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.334 [2024-07-23 18:16:16.743427] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:09.334 [2024-07-23 18:16:16.743511] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.334 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.334 [2024-07-23 18:16:16.811076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.334 [2024-07-23 18:16:16.901647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.334 [2024-07-23 18:16:16.901705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.334 [2024-07-23 18:16:16.901718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.334 [2024-07-23 18:16:16.901729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.334 [2024-07-23 18:16:16.901739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:09.334 [2024-07-23 18:16:16.901822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.334 [2024-07-23 18:16:16.901885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.334 [2024-07-23 18:16:16.901954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:09.334 [2024-07-23 18:16:16.901956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.592 [2024-07-23 18:16:17.045459] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.592 18:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.592 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.593 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.593 Malloc1 00:28:09.593 [2024-07-23 18:16:17.120276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.593 Malloc2 00:28:09.593 Malloc3 00:28:09.593 Malloc4 00:28:09.851 Malloc5 00:28:09.851 Malloc6 00:28:09.851 Malloc7 00:28:09.851 Malloc8 00:28:09.851 Malloc9 
00:28:10.110 Malloc10 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2432540 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2432540 /var/tmp/bdevperf.sock 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2432540 ']' 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:10.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.110 { 00:28:10.110 "params": { 00:28:10.110 "name": "Nvme$subsystem", 00:28:10.110 "trtype": "$TEST_TRANSPORT", 00:28:10.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.110 "adrfam": "ipv4", 00:28:10.110 "trsvcid": "$NVMF_PORT", 00:28:10.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.110 "hdgst": ${hdgst:-false}, 00:28:10.110 "ddgst": ${ddgst:-false} 00:28:10.110 }, 00:28:10.110 "method": "bdev_nvme_attach_controller" 00:28:10.110 } 00:28:10.110 EOF 00:28:10.110 )") 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.110 { 00:28:10.110 "params": { 00:28:10.110 "name": "Nvme$subsystem", 00:28:10.110 "trtype": "$TEST_TRANSPORT", 00:28:10.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.110 "adrfam": "ipv4", 00:28:10.110 "trsvcid": "$NVMF_PORT", 00:28:10.110 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.110 "hdgst": ${hdgst:-false}, 00:28:10.110 "ddgst": ${ddgst:-false} 00:28:10.110 }, 00:28:10.110 "method": "bdev_nvme_attach_controller" 00:28:10.110 } 00:28:10.110 EOF 00:28:10.110 )") 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.110 { 00:28:10.110 "params": { 00:28:10.110 "name": "Nvme$subsystem", 00:28:10.110 "trtype": "$TEST_TRANSPORT", 00:28:10.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.110 "adrfam": "ipv4", 00:28:10.110 "trsvcid": "$NVMF_PORT", 00:28:10.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.110 "hdgst": ${hdgst:-false}, 00:28:10.110 "ddgst": ${ddgst:-false} 00:28:10.110 }, 00:28:10.110 "method": "bdev_nvme_attach_controller" 00:28:10.110 } 00:28:10.110 EOF 00:28:10.110 )") 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.110 { 00:28:10.110 "params": { 00:28:10.110 "name": "Nvme$subsystem", 00:28:10.110 "trtype": "$TEST_TRANSPORT", 00:28:10.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.110 "adrfam": "ipv4", 00:28:10.110 "trsvcid": "$NVMF_PORT", 00:28:10.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.110 "hdgst": 
${hdgst:-false}, 00:28:10.110 "ddgst": ${ddgst:-false} 00:28:10.110 }, 00:28:10.110 "method": "bdev_nvme_attach_controller" 00:28:10.110 } 00:28:10.110 EOF 00:28:10.110 )") 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.110 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.110 { 00:28:10.110 "params": { 00:28:10.110 "name": "Nvme$subsystem", 00:28:10.110 "trtype": "$TEST_TRANSPORT", 00:28:10.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.110 "adrfam": "ipv4", 00:28:10.110 "trsvcid": "$NVMF_PORT", 00:28:10.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.110 "hdgst": ${hdgst:-false}, 00:28:10.110 "ddgst": ${ddgst:-false} 00:28:10.110 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 } 00:28:10.111 EOF 00:28:10.111 )") 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.111 { 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme$subsystem", 00:28:10.111 "trtype": "$TEST_TRANSPORT", 00:28:10.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "$NVMF_PORT", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.111 "hdgst": ${hdgst:-false}, 00:28:10.111 "ddgst": ${ddgst:-false} 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 
00:28:10.111 } 00:28:10.111 EOF 00:28:10.111 )") 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.111 { 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme$subsystem", 00:28:10.111 "trtype": "$TEST_TRANSPORT", 00:28:10.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "$NVMF_PORT", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.111 "hdgst": ${hdgst:-false}, 00:28:10.111 "ddgst": ${ddgst:-false} 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 } 00:28:10.111 EOF 00:28:10.111 )") 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.111 { 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme$subsystem", 00:28:10.111 "trtype": "$TEST_TRANSPORT", 00:28:10.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "$NVMF_PORT", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.111 "hdgst": ${hdgst:-false}, 00:28:10.111 "ddgst": ${ddgst:-false} 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 } 00:28:10.111 EOF 00:28:10.111 )") 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.111 { 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme$subsystem", 00:28:10.111 "trtype": "$TEST_TRANSPORT", 00:28:10.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "$NVMF_PORT", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.111 "hdgst": ${hdgst:-false}, 00:28:10.111 "ddgst": ${ddgst:-false} 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 } 00:28:10.111 EOF 00:28:10.111 )") 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.111 { 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme$subsystem", 00:28:10.111 "trtype": "$TEST_TRANSPORT", 00:28:10.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "$NVMF_PORT", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.111 "hdgst": ${hdgst:-false}, 00:28:10.111 "ddgst": ${ddgst:-false} 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 } 00:28:10.111 EOF 00:28:10.111 )") 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:10.111 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme1", 00:28:10.111 "trtype": "tcp", 00:28:10.111 "traddr": "10.0.0.2", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "4420", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:10.111 "hdgst": false, 00:28:10.111 "ddgst": false 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 },{ 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme2", 00:28:10.111 "trtype": "tcp", 00:28:10.111 "traddr": "10.0.0.2", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "4420", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:10.111 "hdgst": false, 00:28:10.111 "ddgst": false 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 },{ 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme3", 00:28:10.111 "trtype": "tcp", 00:28:10.111 "traddr": "10.0.0.2", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "4420", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:10.111 "hdgst": false, 00:28:10.111 "ddgst": false 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 },{ 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme4", 00:28:10.111 "trtype": "tcp", 00:28:10.111 "traddr": "10.0.0.2", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "4420", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:10.111 "hdgst": false, 00:28:10.111 "ddgst": false 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 },{ 
00:28:10.111 "params": { 00:28:10.111 "name": "Nvme5", 00:28:10.111 "trtype": "tcp", 00:28:10.111 "traddr": "10.0.0.2", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "4420", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:10.111 "hdgst": false, 00:28:10.111 "ddgst": false 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 },{ 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme6", 00:28:10.111 "trtype": "tcp", 00:28:10.111 "traddr": "10.0.0.2", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "4420", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:10.111 "hdgst": false, 00:28:10.111 "ddgst": false 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.111 },{ 00:28:10.111 "params": { 00:28:10.111 "name": "Nvme7", 00:28:10.111 "trtype": "tcp", 00:28:10.111 "traddr": "10.0.0.2", 00:28:10.111 "adrfam": "ipv4", 00:28:10.111 "trsvcid": "4420", 00:28:10.111 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:10.111 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:10.111 "hdgst": false, 00:28:10.111 "ddgst": false 00:28:10.111 }, 00:28:10.111 "method": "bdev_nvme_attach_controller" 00:28:10.112 },{ 00:28:10.112 "params": { 00:28:10.112 "name": "Nvme8", 00:28:10.112 "trtype": "tcp", 00:28:10.112 "traddr": "10.0.0.2", 00:28:10.112 "adrfam": "ipv4", 00:28:10.112 "trsvcid": "4420", 00:28:10.112 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:10.112 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:10.112 "hdgst": false, 00:28:10.112 "ddgst": false 00:28:10.112 }, 00:28:10.112 "method": "bdev_nvme_attach_controller" 00:28:10.112 },{ 00:28:10.112 "params": { 00:28:10.112 "name": "Nvme9", 00:28:10.112 "trtype": "tcp", 00:28:10.112 "traddr": "10.0.0.2", 00:28:10.112 "adrfam": "ipv4", 00:28:10.112 "trsvcid": "4420", 00:28:10.112 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:10.112 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:10.112 "hdgst": false, 00:28:10.112 "ddgst": false 00:28:10.112 }, 00:28:10.112 "method": "bdev_nvme_attach_controller" 00:28:10.112 },{ 00:28:10.112 "params": { 00:28:10.112 "name": "Nvme10", 00:28:10.112 "trtype": "tcp", 00:28:10.112 "traddr": "10.0.0.2", 00:28:10.112 "adrfam": "ipv4", 00:28:10.112 "trsvcid": "4420", 00:28:10.112 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:10.112 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:10.112 "hdgst": false, 00:28:10.112 "ddgst": false 00:28:10.112 }, 00:28:10.112 "method": "bdev_nvme_attach_controller" 00:28:10.112 }' 00:28:10.112 [2024-07-23 18:16:17.608799] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:10.112 [2024-07-23 18:16:17.608890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432540 ] 00:28:10.112 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.112 [2024-07-23 18:16:17.674761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.112 [2024-07-23 18:16:17.763234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.013 Running I/O for 10 seconds... 
00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:12.272 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:12.530 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:12.530 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:12.530 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:12.530 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:12.530 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.530 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.530 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:12.530 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:12.530 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2432540 00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2432540 
']'
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2432540
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2432540
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2432540'
killing process with pid 2432540
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2432540
00:28:12.788 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2432540
00:28:12.788 Received shutdown signal, test time was about 0.929792 seconds
00:28:12.788
00:28:12.788 Latency(us)
00:28:12.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:12.788 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.788 Verification LBA range: start 0x0 length 0x400
00:28:12.788 Nvme1n1 : 0.90 214.13 13.38 0.00 0.00 294989.87 22427.88 254765.13
00:28:12.788 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme2n1 : 0.93 280.83 17.55 0.00 0.00 219734.60 2985.53 254765.13
00:28:12.789 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme3n1 : 0.93 275.59 17.22 0.00 0.00 219390.86 24078.41 248551.35
00:28:12.789 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme4n1 : 0.92 279.29 17.46 0.00 0.00 212649.34 16019.91 250104.79
00:28:12.789 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme5n1 : 0.89 233.12 14.57 0.00 0.00 244759.92 11602.30 253211.69
00:28:12.789 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme6n1 : 0.91 211.44 13.22 0.00 0.00 268604.05 37088.52 243891.01
00:28:12.789 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme7n1 : 0.90 212.30 13.27 0.00 0.00 261670.43 21068.61 256318.58
00:28:12.789 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme8n1 : 0.89 216.04 13.50 0.00 0.00 250539.17 20000.62 251658.24
00:28:12.789 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme9n1 : 0.91 210.48 13.15 0.00 0.00 252517.89 19612.25 260978.92
00:28:12.789 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:12.789 Verification LBA range: start 0x0 length 0x400
00:28:12.789 Nvme10n1 : 0.92 208.30 13.02 0.00 0.00 249761.94 22233.69 276513.37
===================================================================================================================
00:28:12.789 Total : 2341.51 146.34 0.00 0.00 244669.49 2985.53 276513.37
00:28:13.047
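The waitforio polling that produced the read_io_count=3, 67, 131 lines earlier in the trace boils down to a bounded retry loop over `bdev_get_iostat`. A sketch, assuming `rpc_cmd` wraps SPDK's rpc.py client and `jq` is available (both as used in the trace); thresholds and retry count are taken from the traced shutdown.sh:

```shell
# Sketch of shutdown.sh's waitforio: poll a bdev's num_read_ops over the
# bdevperf RPC socket until it reaches 100, retrying up to 10 times with
# a 0.25s pause between polls. Returns 0 once enough I/O is observed.
waitforio() {
  local sock=$1 bdev=$2 ret=1 i read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}
```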
18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2432474 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:14.421 rmmod nvme_tcp 00:28:14.421 rmmod nvme_fabrics 00:28:14.421 rmmod nvme_keyring 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:14.421 
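The killprocess sequence traced above for pid 2432540 (a `kill -0` liveness check, a uname/ps guard so a sudo wrapper is never signalled, then `kill` and `wait`) can be sketched as follows; this is a simplified reading of the traced autotest_common.sh steps, not the exact helper:

```shell
# Sketch: terminate a test process by pid, mirroring the traced steps.
killprocess() {
  local pid=$1 process_name=
  if [ -z "$pid" ]; then return 1; fi
  # Liveness check: kill -0 sends no signal, only tests the pid.
  if ! kill -0 "$pid" 2>/dev/null; then return 1; fi
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  # Never signal a sudo wrapper directly.
  if [ "$process_name" = sudo ]; then return 1; fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true   # reap the child; ignore its signal status
}
```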
18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2432474 ']' 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2432474 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2432474 ']' 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2432474 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2432474 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2432474' 00:28:14.421 killing process with pid 2432474 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2432474 00:28:14.421 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2432474 00:28:14.680 18:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.680 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.680 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.680 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.680 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.680 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.680 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.680 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.215 00:28:17.215 real 0m7.760s 00:28:17.215 user 0m23.619s 00:28:17.215 sys 0m1.537s 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.215 ************************************ 00:28:17.215 END TEST nvmf_shutdown_tc2 00:28:17.215 ************************************ 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:17.215 18:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:17.215 ************************************ 00:28:17.215 START TEST nvmf_shutdown_tc3 00:28:17.215 ************************************ 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.215 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.216 18:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 
00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.216 18:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.216 18:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
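The NIC discovery above walks pci_devs and buckets devices by PCI device id into the e810/x722/mlx arrays before picking net devices. A condensed sketch of that matching (device ids copied from the arrays in the trace; the function name `classify_nic` is invented for illustration):

```shell
# Sketch: classify a NIC by its PCI device id, using the id groups
# that nvmf/common.sh builds in the trace above.
classify_nic() {
  case $1 in
    0x1592 | 0x159b) echo e810 ;;  # Intel E810 family
    0x37d2)          echo x722 ;;  # Intel X722
    0xa2dc | 0x1021 | 0xa2d6 | 0x101d | 0x1017 | 0x1019 | 0x1015 | 0x1013)
                     echo mlx ;;   # Mellanox devices
    *)               echo unknown ;;
  esac
}
```

In the traced run both ports report 0x159b, so they land in the e810 bucket and their `cvl_0_0`/`cvl_0_1` net devices are collected.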
00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.216 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:28:17.217 00:28:17.217 --- 10.0.0.2 ping statistics --- 00:28:17.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.217 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:28:17.217 00:28:17.217 --- 10.0.0.1 ping statistics --- 00:28:17.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.217 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.217 
18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2433455 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2433455 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2433455 ']' 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.217 [2024-07-23 18:16:24.573263] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:28:17.217 [2024-07-23 18:16:24.573369] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.217 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.217 [2024-07-23 18:16:24.638229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.217 [2024-07-23 18:16:24.726754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.217 [2024-07-23 18:16:24.726822] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.217 [2024-07-23 18:16:24.726837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.217 [2024-07-23 18:16:24.726847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.217 [2024-07-23 18:16:24.726857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:17.217 [2024-07-23 18:16:24.726968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.217 [2024-07-23 18:16:24.727030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.217 [2024-07-23 18:16:24.727097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:17.217 [2024-07-23 18:16:24.727099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.217 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.217 [2024-07-23 18:16:24.868530] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.476 18:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.476 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.476 Malloc1 00:28:17.476 [2024-07-23 18:16:24.943374] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.476 Malloc2 00:28:17.476 Malloc3 00:28:17.476 Malloc4 00:28:17.476 Malloc5 00:28:17.734 Malloc6 00:28:17.734 Malloc7 00:28:17.734 Malloc8 00:28:17.734 Malloc9 
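The repeated `for i in "${num_subsystems[@]}"` / `cat` pairs above append one batch of RPC commands per subsystem into rpcs.txt, which `rpc_cmd` then replays (the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener in the log are its result). A sketch of that generator; the RPC names are real SPDK RPCs, but the sizes and flags here are assumptions, not the script's exact values:

```shell
# Illustrative reconstruction of the create_subsystems loop: for each
# index 1..10, emit the RPCs that create a malloc bdev, a subsystem, a
# namespace, and a TCP listener, accumulating them in a batch file.
gen_subsystem_rpcs() {
    local out=$1
    local num_subsystems=({1..10})
    local i
    : > "$out"
    for i in "${num_subsystems[@]}"; do
        printf '%s\n' \
            "bdev_malloc_create -b Malloc$i 128 512" \
            "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
            "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
            "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420" \
            >> "$out"
    done
}
```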
00:28:17.734 Malloc10 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2433632 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2433632 /var/tmp/bdevperf.sock 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2433632 ']' 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:17.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.734 { 00:28:17.734 "params": { 00:28:17.734 "name": "Nvme$subsystem", 00:28:17.734 "trtype": "$TEST_TRANSPORT", 00:28:17.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.734 "adrfam": "ipv4", 00:28:17.734 "trsvcid": "$NVMF_PORT", 00:28:17.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.734 "hdgst": ${hdgst:-false}, 00:28:17.734 "ddgst": ${ddgst:-false} 00:28:17.734 }, 00:28:17.734 "method": "bdev_nvme_attach_controller" 00:28:17.734 } 00:28:17.734 EOF 00:28:17.734 )") 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.734 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.735 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.735 { 00:28:17.735 "params": { 00:28:17.735 "name": "Nvme$subsystem", 00:28:17.735 "trtype": "$TEST_TRANSPORT", 00:28:17.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.735 "adrfam": "ipv4", 00:28:17.735 "trsvcid": "$NVMF_PORT", 00:28:17.735 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.735 "hdgst": ${hdgst:-false}, 00:28:17.735 "ddgst": ${ddgst:-false} 00:28:17.735 }, 00:28:17.735 "method": "bdev_nvme_attach_controller" 00:28:17.735 } 00:28:17.735 EOF 00:28:17.735 )") 00:28:17.735 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.993 { 00:28:17.993 "params": { 00:28:17.993 "name": "Nvme$subsystem", 00:28:17.993 "trtype": "$TEST_TRANSPORT", 00:28:17.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.993 "adrfam": "ipv4", 00:28:17.993 "trsvcid": "$NVMF_PORT", 00:28:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.993 "hdgst": ${hdgst:-false}, 00:28:17.993 "ddgst": ${ddgst:-false} 00:28:17.993 }, 00:28:17.993 "method": "bdev_nvme_attach_controller" 00:28:17.993 } 00:28:17.993 EOF 00:28:17.993 )") 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.993 { 00:28:17.993 "params": { 00:28:17.993 "name": "Nvme$subsystem", 00:28:17.993 "trtype": "$TEST_TRANSPORT", 00:28:17.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.993 "adrfam": "ipv4", 00:28:17.993 "trsvcid": "$NVMF_PORT", 00:28:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.993 "hdgst": 
${hdgst:-false}, 00:28:17.993 "ddgst": ${ddgst:-false} 00:28:17.993 }, 00:28:17.993 "method": "bdev_nvme_attach_controller" 00:28:17.993 } 00:28:17.993 EOF 00:28:17.993 )") 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.993 { 00:28:17.993 "params": { 00:28:17.993 "name": "Nvme$subsystem", 00:28:17.993 "trtype": "$TEST_TRANSPORT", 00:28:17.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.993 "adrfam": "ipv4", 00:28:17.993 "trsvcid": "$NVMF_PORT", 00:28:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.993 "hdgst": ${hdgst:-false}, 00:28:17.993 "ddgst": ${ddgst:-false} 00:28:17.993 }, 00:28:17.993 "method": "bdev_nvme_attach_controller" 00:28:17.993 } 00:28:17.993 EOF 00:28:17.993 )") 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.993 { 00:28:17.993 "params": { 00:28:17.993 "name": "Nvme$subsystem", 00:28:17.993 "trtype": "$TEST_TRANSPORT", 00:28:17.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.993 "adrfam": "ipv4", 00:28:17.993 "trsvcid": "$NVMF_PORT", 00:28:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.993 "hdgst": ${hdgst:-false}, 00:28:17.993 "ddgst": ${ddgst:-false} 00:28:17.993 }, 00:28:17.993 "method": "bdev_nvme_attach_controller" 
00:28:17.993 } 00:28:17.993 EOF 00:28:17.993 )") 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.993 { 00:28:17.993 "params": { 00:28:17.993 "name": "Nvme$subsystem", 00:28:17.993 "trtype": "$TEST_TRANSPORT", 00:28:17.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.993 "adrfam": "ipv4", 00:28:17.993 "trsvcid": "$NVMF_PORT", 00:28:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.993 "hdgst": ${hdgst:-false}, 00:28:17.993 "ddgst": ${ddgst:-false} 00:28:17.993 }, 00:28:17.993 "method": "bdev_nvme_attach_controller" 00:28:17.993 } 00:28:17.993 EOF 00:28:17.993 )") 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.993 { 00:28:17.993 "params": { 00:28:17.993 "name": "Nvme$subsystem", 00:28:17.993 "trtype": "$TEST_TRANSPORT", 00:28:17.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.993 "adrfam": "ipv4", 00:28:17.993 "trsvcid": "$NVMF_PORT", 00:28:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.993 "hdgst": ${hdgst:-false}, 00:28:17.993 "ddgst": ${ddgst:-false} 00:28:17.993 }, 00:28:17.993 "method": "bdev_nvme_attach_controller" 00:28:17.993 } 00:28:17.993 EOF 00:28:17.993 )") 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@554 -- # cat 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.993 { 00:28:17.993 "params": { 00:28:17.993 "name": "Nvme$subsystem", 00:28:17.993 "trtype": "$TEST_TRANSPORT", 00:28:17.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.993 "adrfam": "ipv4", 00:28:17.993 "trsvcid": "$NVMF_PORT", 00:28:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.993 "hdgst": ${hdgst:-false}, 00:28:17.993 "ddgst": ${ddgst:-false} 00:28:17.993 }, 00:28:17.993 "method": "bdev_nvme_attach_controller" 00:28:17.993 } 00:28:17.993 EOF 00:28:17.993 )") 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:17.993 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:17.993 { 00:28:17.993 "params": { 00:28:17.993 "name": "Nvme$subsystem", 00:28:17.993 "trtype": "$TEST_TRANSPORT", 00:28:17.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "$NVMF_PORT", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.994 "hdgst": ${hdgst:-false}, 00:28:17.994 "ddgst": ${ddgst:-false} 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 } 00:28:17.994 EOF 00:28:17.994 )") 00:28:17.994 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:17.994 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
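The ten `config+=("$(cat <<-EOF ... EOF)")` heredocs traced above are gen_nvmf_target_json accumulating one parameter fragment per subsystem, with `IFS=,` joining them afterwards. A self-contained sketch of that accumulate-then-join pattern, with the environment values hard-coded for illustration:

```shell
# Sketch of the gen_nvmf_target_json fragment accumulation seen in the
# trace: each loop iteration expands one heredoc into a JSON fragment
# and appends it to the "config" array; the fragments are then joined
# with commas via IFS. Values are hard-coded stand-ins for the test
# environment variables.
gen_config_fragments() {
    local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas, mirroring the IFS=, + printf step.
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```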
nvmf/common.sh@556 -- # jq . 00:28:17.994 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:17.994 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme1", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme2", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme3", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme4", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 
00:28:17.994 "params": { 00:28:17.994 "name": "Nvme5", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme6", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme7", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme8", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme9", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:17.994 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 },{ 00:28:17.994 "params": { 00:28:17.994 "name": "Nvme10", 00:28:17.994 "trtype": "tcp", 00:28:17.994 "traddr": "10.0.0.2", 00:28:17.994 "adrfam": "ipv4", 00:28:17.994 "trsvcid": "4420", 00:28:17.994 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:17.994 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:17.994 "hdgst": false, 00:28:17.994 "ddgst": false 00:28:17.994 }, 00:28:17.994 "method": "bdev_nvme_attach_controller" 00:28:17.994 }' 00:28:17.994 [2024-07-23 18:16:25.430816] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:17.994 [2024-07-23 18:16:25.430907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433632 ] 00:28:17.994 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.994 [2024-07-23 18:16:25.495390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.994 [2024-07-23 18:16:25.582272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.892 Running I/O for 10 seconds... 
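The bdevperf invocation above reads its config as `--json /dev/fd/63`: the generated JSON never touches disk, it is handed over an anonymous pipe via bash process substitution, which is why a /dev/fd path appears in the command line. A minimal demonstration of that mechanism, with `cat` standing in for bdevperf:

```shell
# Demonstrates the "--json /dev/fd/N" pattern: <(...) makes the inner
# command's output readable as a /dev/fd path, so the consumer (here
# cat, standing in for bdevperf) just opens and reads that descriptor.
demo_process_substitution() {
    cat <(printf '{"ok": true}\n')
}
```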
00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:19.892 18:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:19.892 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:20.150 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:20.408 18:16:28 
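The waitforio polling traced above (read_io_count climbing 3, 67, 131 until `-ge 100` breaks the loop with ret=0) can be sketched as: poll the bdev's read-op counter up to ten times, a quarter second apart, and succeed once at least 100 reads have completed. The iostat command is stubbed as a parameter here; the real script pipes `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1` through `jq -r '.bdevs[0].num_read_ops'`:

```shell
# Sketch of the waitforio loop from target/shutdown.sh: count down from
# i=10, reading the bdev's num_read_ops each pass, and return success as
# soon as the counter reaches 100. $1 is any command that prints the
# current read-op count (a stand-in for the rpc_cmd | jq pipeline).
waitforio_sketch() {
    local get_read_ops=$1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$($get_read_ops)
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```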
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2433455 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2433455 ']' 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2433455 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:20.408 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2433455 00:28:20.672 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:20.672 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:20.672 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2433455' 00:28:20.672 killing process with pid 2433455 00:28:20.672 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2433455 00:28:20.672 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2433455 00:28:20.672 [2024-07-23 18:16:28.071029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with 
the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 
00:28:20.672 [2024-07-23 18:16:28.071440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 
18:16:28.071582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071735] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.071862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87910 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.073989] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.672 [2024-07-23 18:16:28.074139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074175] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074326] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074476] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074632] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074783] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.074811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a87dd0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078791] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078937] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.078990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079088] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079240] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set 00:28:20.673 [2024-07-23 18:16:28.079426] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19912d0 is same with the state(5) to be set
00:28:20.673 [2024-07-23 18:16:28.080910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991790 is same with the state(5) to be set
00:28:20.674 [2024-07-23 18:16:28.082789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991c70 is same with the state(5) to be set
00:28:20.674 [2024-07-23 18:16:28.084446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992130 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.085208]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992130 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.085219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992130 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.085230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992130 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.101575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.101661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.101681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.101695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.101710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.101723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.101737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.101752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.101765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x276e760 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.101830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.101880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.101899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.101914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.101928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.101943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.101960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.101998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ff580 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.102066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27fed10 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.102232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a610 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.102421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2766700 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.102592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:20.675 [2024-07-23 18:16:28.102705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.675 [2024-07-23 18:16:28.102718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290d5f0 is same with the state(5) to be set
00:28:20.675 [2024-07-23 18:16:28.102764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.102785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.102800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.102814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.102828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.102842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.102856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.102869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.102883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290d230 is same with the state(5) to be set 00:28:20.675 [2024-07-23 18:16:28.102928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.102949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.102964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.102978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.102997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2867440 is same with the state(5) to be set 00:28:20.675 [2024-07-23 18:16:28.103098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290de50 is same with the state(5) to be set 00:28:20.675 [2024-07-23 18:16:28.103258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.675 [2024-07-23 18:16:28.103383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103396] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2742290 is same with the state(5) to be set 00:28:20.675 [2024-07-23 18:16:28.103919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.103945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.103972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.103994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:20.675 [2024-07-23 18:16:28.104283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 
18:16:28.104460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.675 [2024-07-23 18:16:28.104475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.675 [2024-07-23 18:16:28.104489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.104974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.104988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 
[2024-07-23 18:16:28.105162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 
18:16:28.105869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.105929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.105973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:20.676 [2024-07-23 18:16:28.106062] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28f5dd0 was disconnected and freed. reset controller. 
00:28:20.676 [2024-07-23 18:16:28.106120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 
[2024-07-23 18:16:28.106878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.676 [2024-07-23 18:16:28.106962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.676 [2024-07-23 18:16:28.106978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.106994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 
[2024-07-23 18:16:28.107621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.107981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.107997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28f7310 is same with the state(5) to be set 00:28:20.677 [2024-07-23 18:16:28.108300] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28f7310 was disconnected and freed. reset controller. 
00:28:20.677 [2024-07-23 18:16:28.108372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108559] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.108976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.108993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.109007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.109023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.109037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.109052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.109080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.109098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 
[2024-07-23 18:16:28.109113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.109128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.109143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.109159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.109173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.109189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.677 [2024-07-23 18:16:28.109204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.677 [2024-07-23 18:16:28.109220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.109234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.109251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.109265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.109281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.109295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.120777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.120845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.120862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.120877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.120893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.120907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.120923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.120938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.120954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.120971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 
[2024-07-23 18:16:28.121362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.121942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.121958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x273dfe0 is same with the state(5) to be set 00:28:20.678 [2024-07-23 18:16:28.122073] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x273dfe0 was disconnected and freed. reset controller. 
00:28:20.678 [2024-07-23 18:16:28.122583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.122980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.122995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 
18:16:28.123308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.678 [2024-07-23 18:16:28.123719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.678 [2024-07-23 18:16:28.123734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.123763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.123795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.123825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 
[2024-07-23 18:16:28.123855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.123886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.123916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.123946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.123977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.123996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124570] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.124601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.124617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.125219] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28dfb90 was disconnected and freed. reset controller. 00:28:20.679 [2024-07-23 18:16:28.125351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x276e760 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27ff580 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27fed10 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a610 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2766700 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x290d5f0 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x290d230 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125534] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2867440 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x290de50 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.125586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2742290 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.130962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:20.679 [2024-07-23 18:16:28.131029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:20.679 [2024-07-23 18:16:28.131049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:20.679 [2024-07-23 18:16:28.131066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:20.679 [2024-07-23 18:16:28.132100] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:20.679 [2024-07-23 18:16:28.132179] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:20.679 [2024-07-23 18:16:28.132246] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:20.679 [2024-07-23 18:16:28.132321] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:20.679 [2024-07-23 18:16:28.132738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.679 [2024-07-23 18:16:28.132773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x276e760 with addr=10.0.0.2, port=4420 00:28:20.679 [2024-07-23 18:16:28.132792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x276e760 is same with the state(5) to be set 00:28:20.679 [2024-07-23 18:16:28.132894] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.679 [2024-07-23 18:16:28.132920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2766700 with addr=10.0.0.2, port=4420 00:28:20.679 [2024-07-23 18:16:28.132937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2766700 is same with the state(5) to be set 00:28:20.679 [2024-07-23 18:16:28.133030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.679 [2024-07-23 18:16:28.133055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x290d5f0 with addr=10.0.0.2, port=4420 00:28:20.679 [2024-07-23 18:16:28.133071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290d5f0 is same with the state(5) to be set 00:28:20.679 [2024-07-23 18:16:28.133168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.679 [2024-07-23 18:16:28.133193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27ff580 with addr=10.0.0.2, port=4420 00:28:20.679 [2024-07-23 18:16:28.133209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ff580 is same with the state(5) to be set 00:28:20.679 [2024-07-23 18:16:28.133268] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:20.679 [2024-07-23 18:16:28.133360] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:20.679 [2024-07-23 18:16:28.133492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x276e760 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.133524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2766700 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.133544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x290d5f0 
(9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.133563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27ff580 (9): Bad file descriptor 00:28:20.679 [2024-07-23 18:16:28.133680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:20.679 [2024-07-23 18:16:28.133703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:20.679 [2024-07-23 18:16:28.133722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:20.679 [2024-07-23 18:16:28.133743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:20.679 [2024-07-23 18:16:28.133758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:20.679 [2024-07-23 18:16:28.133782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:20.679 [2024-07-23 18:16:28.133800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:20.679 [2024-07-23 18:16:28.133815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:20.679 [2024-07-23 18:16:28.133828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:20.679 [2024-07-23 18:16:28.133846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:20.679 [2024-07-23 18:16:28.133859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:20.679 [2024-07-23 18:16:28.133872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:20.679 [2024-07-23 18:16:28.133924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.679 [2024-07-23 18:16:28.133943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.679 [2024-07-23 18:16:28.133955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.679 [2024-07-23 18:16:28.133966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.679 [2024-07-23 18:16:28.135436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.679 [2024-07-23 18:16:28.135872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.679 [2024-07-23 18:16:28.135886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.135902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.135917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.135933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.135947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.135964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.135978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.135994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136119] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136287] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 
18:16:28.136652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.136974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.136989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:20.680 [2024-07-23 18:16:28.137174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.680 [2024-07-23 18:16:28.137356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.680 [2024-07-23 18:16:28.137371] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.680 [2024-07-23 18:16:28.137387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.680 [2024-07-23 18:16:28.137401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid 62-63 (lba 24320-24448) ...]
00:28:20.680 [2024-07-23 18:16:28.137478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28e77c0 is same with the state(5) to be set
00:28:20.680 [2024-07-23 18:16:28.138743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.680 [2024-07-23 18:16:28.138766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid 15-63 (lba 18304-24448), then WRITE / ABORTED - SQ DELETION pairs for cid 0-12 (lba 24576-26112) ...]
00:28:20.682 [2024-07-23 18:16:28.140740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.682 [2024-07-23 18:16:28.140754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.682 [2024-07-23 18:16:28.140768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28f4960 is same with the state(5) to be set
00:28:20.682 [2024-07-23 18:16:28.142015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.682 [2024-07-23 18:16:28.142038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid 5-41 (lba 17024-21632) ...]
00:28:20.683 [2024-07-23 18:16:28.143240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.683 [2024-07-23 18:16:28.143254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.683 [2024-07-23 18:16:28.143270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.143945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 
18:16:28.143976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.143992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.144007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.144023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.144037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.144052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28f81f0 is same with the state(5) to be set 00:28:20.683 [2024-07-23 18:16:28.145294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.145323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.145346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.145363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.145380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.683 [2024-07-23 18:16:28.145395] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.683 [2024-07-23 18:16:28.145417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 
18:16:28.145756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.145977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.145991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:20.684 [2024-07-23 18:16:28.146294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.684 [2024-07-23 18:16:28.146529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.684 [2024-07-23 18:16:28.146544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:20.942 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:28:20.942 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:28:21.201 [2024-07-23 18:16:28.673224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23
18:16:28.673427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673586] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 
[2024-07-23 18:16:28.673928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.673983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.673996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.674011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.674023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.674038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.674050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.674065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.674078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.674092] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dee30 is same with the state(5) to be set 00:28:21.201 [2024-07-23 18:16:28.675506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.201 [2024-07-23 18:16:28.675804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.201 [2024-07-23 18:16:28.675818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.675845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.675860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.675872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:21.202 [2024-07-23 18:16:28.675887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.675899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.675928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.675941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.675959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.675973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676072] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 
18:16:28.676628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676800] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.676979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.202 [2024-07-23 18:16:28.676992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.202 [2024-07-23 18:16:28.677006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 
[2024-07-23 18:16:28.677135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.677571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.677584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e0330 is same with the state(5) to be set 00:28:21.203 [2024-07-23 18:16:28.678861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.678882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.678915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.678929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.678943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.678956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:21.203 [2024-07-23 18:16:28.678987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.203 [2024-07-23 18:16:28.679492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.203 [2024-07-23 18:16:28.679506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:21.203 [2024-07-23 18:16:28.679520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.679969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.679982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 
18:16:28.680200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.204 [2024-07-23 18:16:28.680689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.204 [2024-07-23 18:16:28.680702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.205 [2024-07-23 18:16:28.680716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.205 [2024-07-23 18:16:28.680729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.205 [2024-07-23 18:16:28.680743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.205 
[2024-07-23 18:16:28.680770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.205 [2024-07-23 18:16:28.680786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.205 [2024-07-23 18:16:28.680799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.205 [2024-07-23 18:16:28.680817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.205 [2024-07-23 18:16:28.680830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.205 [2024-07-23 18:16:28.680844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.205 [2024-07-23 18:16:28.680856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.205 [2024-07-23 18:16:28.680870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.205 [2024-07-23 18:16:28.680882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.205 [2024-07-23 18:16:28.680910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e15e0 is same with the state(5) to be set 00:28:21.205 [2024-07-23 18:16:28.682515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.205 [2024-07-23 18:16:28.682548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:21.205 [2024-07-23 18:16:28.682565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:21.205 [2024-07-23 18:16:28.682580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:21.205 [2024-07-23 18:16:28.682694] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:21.205 [2024-07-23 18:16:28.682718] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:21.205 [2024-07-23 18:16:28.682825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:21.205 task offset: 24576 on job bdev=Nvme3n1 fails
00:28:21.205
00:28:21.205 Latency(us)
00:28:21.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:21.205 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme1n1 ended in about 0.94 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme1n1 : 0.94 136.38 8.52 68.19 0.00 309476.31 20194.80 268746.15
00:28:21.205 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme2n1 ended in about 0.94 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme2n1 : 0.94 150.78 9.42 67.96 0.00 283752.73 35535.08 270299.59
00:28:21.205 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme3n1 ended in about 0.93 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme3n1 : 0.93 207.15 12.95 69.05 0.00 219907.03 16990.81 240784.12
00:28:21.205 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme4n1 ended in about 0.93 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme4n1 : 0.93 206.91 12.93 68.97 0.00 215603.77 20097.71 267192.70
00:28:21.205 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme5n1 ended in about 0.93 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme5n1 : 0.93 206.67 12.92 68.89 0.00 211274.15 18738.44 268746.15
00:28:21.205 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme6n1 ended in about 0.95 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme6n1 : 0.95 139.67 8.73 67.72 0.00 275281.66 19806.44 274959.93
00:28:21.205 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme7n1 ended in about 1.47 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme7n1 : 1.47 89.56 5.60 43.42 0.00 441916.23 15922.82 742547.15
00:28:21.205 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme8n1 ended in about 1.48 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme8n1 : 1.48 86.65 5.42 43.32 0.00 446235.43 18447.17 773616.07
00:28:21.205 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme9n1 ended in about 1.48 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme9n1 : 1.48 86.45 5.40 43.23 0.00 441391.03 17282.09 689729.99
00:28:21.205 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:21.205 Job: Nvme10n1 ended in about 0.93 seconds with error
00:28:21.205 Verification LBA range: start 0x0 length 0x400
00:28:21.205 Nvme10n1 : 0.93 137.58 8.60 68.79 0.00 252162.65 21942.42 299815.06
00:28:21.205
===================================================================================================================
00:28:21.205 Total : 1447.81 90.49 609.54 0.00 301246.37 15922.82 773616.07
00:28:21.205 [2024-07-23 18:16:28.709013] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:21.205 [2024-07-23 18:16:28.709111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:21.205 [2024-07-23 18:16:28.709529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.205 [2024-07-23 18:16:28.709567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2742290 with addr=10.0.0.2, port=4420
00:28:21.205 [2024-07-23 18:16:28.709599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2742290 is same with the state(5) to be set
00:28:21.205 [2024-07-23 18:16:28.709725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.205 [2024-07-23 18:16:28.709752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x290de50 with addr=10.0.0.2, port=4420
00:28:21.205 [2024-07-23 18:16:28.709768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290de50 is same with the state(5) to be set
00:28:21.205 [2024-07-23 18:16:28.709947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.205 [2024-07-23 18:16:28.709988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x290d230 with addr=10.0.0.2, port=4420
00:28:21.205 [2024-07-23 18:16:28.710004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290d230 is same with the state(5) to be set
00:28:21.205 [2024-07-23 18:16:28.710201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:21.205 [2024-07-23 18:16:28.710248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0x2867440 with addr=10.0.0.2, port=4420 00:28:21.205 [2024-07-23 18:16:28.710263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2867440 is same with the state(5) to be set 00:28:21.205 [2024-07-23 18:16:28.711845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:21.205 [2024-07-23 18:16:28.711872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:21.205 [2024-07-23 18:16:28.711906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:21.205 [2024-07-23 18:16:28.711922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:21.205 [2024-07-23 18:16:28.712153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.205 [2024-07-23 18:16:28.712206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223a610 with addr=10.0.0.2, port=4420 00:28:21.206 [2024-07-23 18:16:28.712243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a610 is same with the state(5) to be set 00:28:21.206 [2024-07-23 18:16:28.712384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.206 [2024-07-23 18:16:28.712411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27fed10 with addr=10.0.0.2, port=4420 00:28:21.206 [2024-07-23 18:16:28.712427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27fed10 is same with the state(5) to be set 00:28:21.206 [2024-07-23 18:16:28.712452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2742290 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.712476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x290de50 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.712493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x290d230 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.712510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2867440 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.712562] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:21.206 [2024-07-23 18:16:28.712587] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:21.206 [2024-07-23 18:16:28.712608] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:21.206 [2024-07-23 18:16:28.712642] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:21.206 [2024-07-23 18:16:28.713244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.206 [2024-07-23 18:16:28.713272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27ff580 with addr=10.0.0.2, port=4420 00:28:21.206 [2024-07-23 18:16:28.713302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ff580 is same with the state(5) to be set 00:28:21.206 [2024-07-23 18:16:28.713436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.206 [2024-07-23 18:16:28.713466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x290d5f0 with addr=10.0.0.2, port=4420 00:28:21.206 [2024-07-23 18:16:28.713481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290d5f0 is same with the state(5) to be set 00:28:21.206 [2024-07-23 18:16:28.713601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.206 
[2024-07-23 18:16:28.713627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2766700 with addr=10.0.0.2, port=4420 00:28:21.206 [2024-07-23 18:16:28.713642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2766700 is same with the state(5) to be set 00:28:21.206 [2024-07-23 18:16:28.713760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.206 [2024-07-23 18:16:28.713808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x276e760 with addr=10.0.0.2, port=4420 00:28:21.206 [2024-07-23 18:16:28.713823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x276e760 is same with the state(5) to be set 00:28:21.206 [2024-07-23 18:16:28.713842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a610 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.713860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27fed10 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.713877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.713890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.713907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.206 [2024-07-23 18:16:28.713926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.713945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.713958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:28:21.206 [2024-07-23 18:16:28.713994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.714007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.714019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:21.206 [2024-07-23 18:16:28.714034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.714061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.714072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:21.206 [2024-07-23 18:16:28.714163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.206 [2024-07-23 18:16:28.714182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.206 [2024-07-23 18:16:28.714193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.206 [2024-07-23 18:16:28.714203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.206 [2024-07-23 18:16:28.714216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27ff580 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.714233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x290d5f0 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.714248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2766700 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.714264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x276e760 (9): Bad file descriptor 00:28:21.206 [2024-07-23 18:16:28.714278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.714289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.714300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:21.206 [2024-07-23 18:16:28.714322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.714363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.714375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:21.206 [2024-07-23 18:16:28.714409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.206 [2024-07-23 18:16:28.714425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.206 [2024-07-23 18:16:28.714436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.714447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.714459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:21.206 [2024-07-23 18:16:28.714474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.714485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.714497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:21.206 [2024-07-23 18:16:28.714510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.714527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.714539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:21.206 [2024-07-23 18:16:28.714553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:21.206 [2024-07-23 18:16:28.714564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:21.206 [2024-07-23 18:16:28.714575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:21.206 [2024-07-23 18:16:28.714609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.206 [2024-07-23 18:16:28.714625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.206 [2024-07-23 18:16:28.714635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.206 [2024-07-23 18:16:28.714645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2433632 00:28:22.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2433632) - No such process 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@120 -- # set +e 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:22.140 rmmod nvme_tcp 00:28:22.140 rmmod nvme_fabrics 00:28:22.140 rmmod nvme_keyring 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.140 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.042 18:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:24.042 00:28:24.042 real 0m7.361s 00:28:24.042 user 0m17.729s 00:28:24.042 sys 0m1.456s 00:28:24.042 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:24.042 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:24.042 ************************************ 00:28:24.042 END TEST nvmf_shutdown_tc3 00:28:24.042 ************************************ 00:28:24.301 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:24.301 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:24.301 00:28:24.301 real 0m27.212s 00:28:24.301 user 1m15.149s 00:28:24.301 sys 0m6.428s 00:28:24.301 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:24.301 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:24.301 ************************************ 00:28:24.301 END TEST nvmf_shutdown 00:28:24.301 ************************************ 00:28:24.301 18:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:28:24.301 18:16:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:28:24.301 00:28:24.301 real 16m36.446s 00:28:24.301 user 46m52.825s 00:28:24.301 sys 3m45.681s 00:28:24.301 18:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:24.301 18:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:24.301 ************************************ 00:28:24.301 END TEST nvmf_target_extra 00:28:24.301 ************************************ 00:28:24.301 18:16:31 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:28:24.301 18:16:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:24.301 18:16:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:24.301 18:16:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.301 18:16:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.301 ************************************ 00:28:24.301 START TEST nvmf_host 00:28:24.301 ************************************ 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:24.301 * Looking for test storage... 00:28:24.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # 
nvme gen-hostnqn 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.301 ************************************ 00:28:24.301 START TEST nvmf_multicontroller 00:28:24.301 ************************************ 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:24.301 * Looking for test storage... 
00:28:24.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.301 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:24.302 18:16:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:26.842 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:26.842 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:26.842 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.842 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:26.843 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.843 18:16:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:26.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:28:26.843 00:28:26.843 --- 10.0.0.2 ping statistics --- 00:28:26.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.843 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:28:26.843 00:28:26.843 --- 10.0.0.1 ping statistics --- 00:28:26.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.843 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2436176 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2436176 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2436176 ']' 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.843 [2024-07-23 18:16:34.173751] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:26.843 [2024-07-23 18:16:34.173822] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.843 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.843 [2024-07-23 18:16:34.235388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:26.843 [2024-07-23 18:16:34.318391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.843 [2024-07-23 18:16:34.318448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:26.843 [2024-07-23 18:16:34.318471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.843 [2024-07-23 18:16:34.318481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.843 [2024-07-23 18:16:34.318490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.843 [2024-07-23 18:16:34.318578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.843 [2024-07-23 18:16:34.318647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.843 [2024-07-23 18:16:34.318644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.843 [2024-07-23 18:16:34.459850] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.843 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 Malloc0 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 [2024-07-23 
18:16:34.522407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 [2024-07-23 18:16:34.530269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 Malloc1 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2436206 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2436206 /var/tmp/bdevperf.sock 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2436206 ']' 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:27.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.149 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 NVMe0n1 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 18:16:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.408 1 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 request: 00:28:27.408 { 00:28:27.408 "name": "NVMe0", 00:28:27.408 "trtype": "tcp", 00:28:27.408 "traddr": "10.0.0.2", 00:28:27.408 "adrfam": "ipv4", 00:28:27.408 "trsvcid": "4420", 00:28:27.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.408 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:27.408 "hostaddr": "10.0.0.2", 00:28:27.408 "hostsvcid": "60000", 00:28:27.408 "prchk_reftag": false, 00:28:27.408 "prchk_guard": false, 00:28:27.408 "hdgst": false, 00:28:27.408 "ddgst": false, 00:28:27.408 "method": "bdev_nvme_attach_controller", 00:28:27.408 "req_id": 1 00:28:27.408 } 00:28:27.408 Got JSON-RPC error response 00:28:27.408 response: 00:28:27.408 { 00:28:27.408 "code": -114, 00:28:27.408 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:27.408 } 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:27.408 18:16:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 request: 00:28:27.408 { 00:28:27.408 "name": "NVMe0", 00:28:27.408 "trtype": "tcp", 00:28:27.408 "traddr": "10.0.0.2", 00:28:27.408 "adrfam": "ipv4", 00:28:27.408 "trsvcid": "4420", 00:28:27.408 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:27.408 "hostaddr": "10.0.0.2", 00:28:27.408 "hostsvcid": "60000", 00:28:27.408 "prchk_reftag": false, 00:28:27.408 "prchk_guard": false, 00:28:27.408 "hdgst": false, 00:28:27.408 "ddgst": false, 00:28:27.408 "method": "bdev_nvme_attach_controller", 00:28:27.408 "req_id": 1 00:28:27.408 } 00:28:27.408 Got JSON-RPC error response 00:28:27.408 response: 00:28:27.408 { 00:28:27.408 "code": -114, 00:28:27.408 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:27.408 } 00:28:27.408 
18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:27.408 18:16:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 request: 00:28:27.408 { 00:28:27.408 "name": "NVMe0", 00:28:27.408 "trtype": "tcp", 00:28:27.408 "traddr": "10.0.0.2", 00:28:27.408 "adrfam": "ipv4", 00:28:27.408 "trsvcid": "4420", 00:28:27.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.408 "hostaddr": "10.0.0.2", 00:28:27.408 "hostsvcid": "60000", 00:28:27.408 "prchk_reftag": false, 00:28:27.408 "prchk_guard": false, 00:28:27.408 "hdgst": false, 00:28:27.408 "ddgst": false, 00:28:27.408 "multipath": "disable", 00:28:27.408 "method": "bdev_nvme_attach_controller", 00:28:27.408 "req_id": 1 00:28:27.408 } 00:28:27.408 Got JSON-RPC error response 00:28:27.408 response: 00:28:27.408 { 00:28:27.408 "code": -114, 00:28:27.408 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:27.408 } 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:27.408 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.409 request: 00:28:27.409 { 00:28:27.409 "name": "NVMe0", 00:28:27.409 "trtype": "tcp", 00:28:27.409 "traddr": "10.0.0.2", 00:28:27.409 "adrfam": "ipv4", 00:28:27.409 "trsvcid": "4420", 00:28:27.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.409 "hostaddr": "10.0.0.2", 00:28:27.409 "hostsvcid": "60000", 00:28:27.409 "prchk_reftag": false, 00:28:27.409 "prchk_guard": false, 00:28:27.409 "hdgst": false, 00:28:27.409 "ddgst": false, 00:28:27.409 "multipath": "failover", 00:28:27.409 "method": "bdev_nvme_attach_controller", 00:28:27.409 "req_id": 1 00:28:27.409 } 00:28:27.409 Got JSON-RPC error response 00:28:27.409 response: 00:28:27.409 { 00:28:27.409 "code": -114, 00:28:27.409 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:27.409 
} 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.409 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.666 00:28:27.667 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.667 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:27.667 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.667 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.667 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.667 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:27.667 18:16:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.667 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.924 00:28:27.924 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.924 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:27.924 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:27.924 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.924 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:27.924 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.924 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:27.924 18:16:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:29.299 0 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2436206 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' 
-z 2436206 ']' 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2436206 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436206 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436206' 00:28:29.299 killing process with pid 2436206 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2436206 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2436206 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:29.299 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:29.299 [2024-07-23 18:16:34.630880] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:28:29.299 [2024-07-23 18:16:34.630984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436206 ] 00:28:29.299 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.299 [2024-07-23 18:16:34.692017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.299 [2024-07-23 18:16:34.778651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.299 [2024-07-23 18:16:35.472689] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 2d604d62-801b-4bd4-8477-142e740bec14 already exists 00:28:29.299 [2024-07-23 18:16:35.472731] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:2d604d62-801b-4bd4-8477-142e740bec14 alias for bdev NVMe1n1 00:28:29.299 [2024-07-23 18:16:35.472746] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:29.299 Running I/O for 1 seconds... 
00:28:29.299 00:28:29.299 Latency(us) 00:28:29.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.299 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:29.299 NVMe0n1 : 1.00 16838.13 65.77 0.00 0.00 7589.15 6699.24 15825.73 00:28:29.299 =================================================================================================================== 00:28:29.299 Total : 16838.13 65.77 0.00 0.00 7589.15 6699.24 15825.73 00:28:29.299 Received shutdown signal, test time was about 1.000000 seconds 00:28:29.299 00:28:29.299 Latency(us) 00:28:29.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.299 =================================================================================================================== 00:28:29.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.299 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:29.299 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:29.300 
rmmod nvme_tcp 00:28:29.300 rmmod nvme_fabrics 00:28:29.300 rmmod nvme_keyring 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2436176 ']' 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2436176 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2436176 ']' 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2436176 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436176 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436176' 00:28:29.300 killing process with pid 2436176 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2436176 00:28:29.300 18:16:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2436176 00:28:29.558 18:16:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:29.558 18:16:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:29.558 18:16:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:29.558 18:16:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:29.558 18:16:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:29.558 18:16:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.558 18:16:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.558 18:16:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:32.093 00:28:32.093 real 0m7.365s 00:28:32.093 user 0m11.345s 00:28:32.093 sys 0m2.405s 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:32.093 ************************************ 00:28:32.093 END TEST nvmf_multicontroller 00:28:32.093 ************************************ 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.093 ************************************ 00:28:32.093 START TEST 
nvmf_aer 00:28:32.093 ************************************ 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:32.093 * Looking for test storage... 00:28:32.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.093 18:16:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:33.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:33.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:33.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:33.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:33.998 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:33.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:33.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms
00:28:33.999 
00:28:33.999 --- 10.0.0.2 ping statistics ---
00:28:33.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:33.999 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:33.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:33.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms
00:28:33.999 
00:28:33.999 --- 10.0.0.1 ping statistics ---
00:28:33.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:33.999 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2438411
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2438411
00:28:33.999 18:16:41
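The trace above shows the test harness building its NVMe/TCP loopback topology: one NIC port (cvl_0_0) is moved into a network namespace to host the target, the sibling port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule admits port 4420, and a ping in each direction verifies the path. A condensed sketch of those steps follows; this is not the SPDK script itself (the real logic lives in nvmf/common.sh's nvmf_tcp_init), and the `run`/`setup_tcp_loopback` names are hypothetical. It defaults to dry-run so it can be exercised without root or the actual interfaces.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the netns loopback setup traced above.
# DRY_RUN=1 (the default) prints each command instead of executing it,
# since the real setup requires root and the physical NIC ports.
: "${DRY_RUN:=1}"

run() {
  if [ -n "$DRY_RUN" ]; then
    echo "$*"        # dry-run: show the command
  else
    "$@"             # real run: execute it
  fi
}

setup_tcp_loopback() {
  local tgt_if=$1 ini_if=$2 ns=$3   # e.g. cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"                          # target port into the namespace
  run ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator IP in root namespace
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP inside the namespace
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
  run ping -c 1 10.0.0.2                                         # verify the loopback path
}

setup_tcp_loopback cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Because both ends live on one machine, this gives the suite a real TCP path between initiator and target without any external network, and tearing it down is a single `ip netns delete`.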
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2438411 ']' 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:33.999 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:33.999 [2024-07-23 18:16:41.611020] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:33.999 [2024-07-23 18:16:41.611091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.999 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.257 [2024-07-23 18:16:41.674871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.257 [2024-07-23 18:16:41.763706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.257 [2024-07-23 18:16:41.763760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.257 [2024-07-23 18:16:41.763775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.257 [2024-07-23 18:16:41.763786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:34.257 [2024-07-23 18:16:41.763796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.257 [2024-07-23 18:16:41.763889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.257 [2024-07-23 18:16:41.764010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.257 [2024-07-23 18:16:41.764050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.257 [2024-07-23 18:16:41.764054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.257 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.257 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:34.257 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:34.257 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:34.257 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.258 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.258 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:34.258 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.258 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.258 [2024-07-23 18:16:41.914850] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.516 18:16:41 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.516 Malloc0 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.516 [2024-07-23 18:16:41.968761] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.516 [ 
00:28:34.516 {
00:28:34.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:34.516 "subtype": "Discovery",
00:28:34.516 "listen_addresses": [],
00:28:34.516 "allow_any_host": true,
00:28:34.516 "hosts": []
00:28:34.516 },
00:28:34.516 {
00:28:34.516 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:34.516 "subtype": "NVMe",
00:28:34.516 "listen_addresses": [
00:28:34.516 {
00:28:34.516 "trtype": "TCP",
00:28:34.516 "adrfam": "IPv4",
00:28:34.516 "traddr": "10.0.0.2",
00:28:34.516 "trsvcid": "4420"
00:28:34.516 }
00:28:34.516 ],
00:28:34.516 "allow_any_host": true,
00:28:34.516 "hosts": [],
00:28:34.516 "serial_number": "SPDK00000000000001",
00:28:34.516 "model_number": "SPDK bdev Controller",
00:28:34.516 "max_namespaces": 2,
00:28:34.516 "min_cntlid": 1,
00:28:34.516 "max_cntlid": 65519,
00:28:34.516 "namespaces": [
00:28:34.516 {
00:28:34.516 "nsid": 1,
00:28:34.516 "bdev_name": "Malloc0",
00:28:34.516 "name": "Malloc0",
00:28:34.516 "nguid": "3AE2E802C98C4BC98A0570B13FDC3886",
00:28:34.516 "uuid": "3ae2e802-c98c-4bc9-8a05-70b13fdc3886"
00:28:34.516 }
00:28:34.516 ]
00:28:34.516 }
00:28:34.516 ]
00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2438552
00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:28:34.516 18:16:41
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:34.516 18:16:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:34.516 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.516 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:34.516 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:34.516 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:34.516 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.775 Malloc1 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.775 [ 00:28:34.775 { 00:28:34.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:34.775 "subtype": "Discovery", 00:28:34.775 "listen_addresses": [], 00:28:34.775 "allow_any_host": true, 00:28:34.775 "hosts": [] 00:28:34.775 }, 00:28:34.775 { 00:28:34.775 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.775 "subtype": "NVMe", 00:28:34.775 "listen_addresses": [ 00:28:34.775 { 00:28:34.775 "trtype": "TCP", 00:28:34.775 "adrfam": "IPv4", 00:28:34.775 "traddr": "10.0.0.2", 00:28:34.775 "trsvcid": "4420" 00:28:34.775 } 00:28:34.775 ], 00:28:34.775 "allow_any_host": true, 00:28:34.775 "hosts": [], 00:28:34.775 "serial_number": "SPDK00000000000001", 00:28:34.775 "model_number": 
"SPDK bdev Controller",
00:28:34.775 "max_namespaces": 2,
00:28:34.775 "min_cntlid": 1,
00:28:34.775 "max_cntlid": 65519,
00:28:34.775 "namespaces": [
00:28:34.775 {
00:28:34.775 "nsid": 1,
00:28:34.775 "bdev_name": "Malloc0",
00:28:34.775 "name": "Malloc0",
00:28:34.775 "nguid": "3AE2E802C98C4BC98A0570B13FDC3886",
00:28:34.775 "uuid": "3ae2e802-c98c-4bc9-8a05-70b13fdc3886"
00:28:34.775 },
00:28:34.775 {
00:28:34.775 "nsid": 2,
00:28:34.775 "bdev_name": "Malloc1",
00:28:34.775 "name": "Malloc1",
00:28:34.775 "nguid": "46106B0554D743F9BA45DABE0F2CA977",
00:28:34.775 "uuid": "46106b05-54d7-43f9-ba45-dabe0f2ca977"
00:28:34.775 }
00:28:34.775 ]
00:28:34.775 }
00:28:34.775 ]
00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2438552
00:28:34.775 Asynchronous Event Request test
00:28:34.775 Attaching to 10.0.0.2
00:28:34.775 Attached to 10.0.0.2
00:28:34.775 Registering asynchronous event callbacks...
00:28:34.775 Starting namespace attribute notice tests for all controllers...
00:28:34.775 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:28:34.775 aer_cb - Changed Namespace
00:28:34.775 Cleaning up... 
00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.775 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:34.776 rmmod nvme_tcp 
00:28:34.776 rmmod nvme_fabrics 00:28:34.776 rmmod nvme_keyring 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2438411 ']' 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2438411 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2438411 ']' 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2438411 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2438411 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2438411' 00:28:34.776 killing process with pid 2438411 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2438411 00:28:34.776 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2438411 00:28:35.035 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.035 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:35.035 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:35.035 18:16:42 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.035 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:35.035 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.035 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.035 18:16:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:37.567 00:28:37.567 real 0m5.352s 00:28:37.567 user 0m4.084s 00:28:37.567 sys 0m1.941s 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:37.567 ************************************ 00:28:37.567 END TEST nvmf_aer 00:28:37.567 ************************************ 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.567 ************************************ 00:28:37.567 START TEST nvmf_async_init 00:28:37.567 ************************************ 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:37.567 * Looking for test storage... 
00:28:37.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.567 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.568 18:16:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:37.568 18:16:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=18d7c2c0989e4fccbd0791993c784fe4 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:37.568 18:16:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.471 
18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:39.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.471 18:16:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:39.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:39.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:39.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:39.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:39.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms
00:28:39.471 
00:28:39.471 --- 10.0.0.2 ping statistics ---
00:28:39.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:39.471 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:39.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:39.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:28:39.471 
00:28:39.471 --- 10.0.0.1 ping statistics ---
00:28:39.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:39.471 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:39.471 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:39.472 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:39.472 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:39.472 18:16:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.472 18:16:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2440490 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2440490 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2440490 ']' 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.472 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.472 [2024-07-23 18:16:47.046791] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:28:39.472 [2024-07-23 18:16:47.046870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.472 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.472 [2024-07-23 18:16:47.107739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.730 [2024-07-23 18:16:47.190347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.730 [2024-07-23 18:16:47.190404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.730 [2024-07-23 18:16:47.190426] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.730 [2024-07-23 18:16:47.190437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.730 [2024-07-23 18:16:47.190447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
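At this point in the log the target has been launched inside the `cvl_0_0_ns_spdk` namespace and the harness blocks in `waitforlisten 2440490` until the RPC socket at `/var/tmp/spdk.sock` comes up. A minimal sketch of that wait pattern follows; the helper name `wait_for_path` and its retry parameters are illustrative, not SPDK's actual `waitforlisten` (which additionally probes the socket via `rpc.py` and checks that the PID is still alive):

```shell
# Simplified stand-in for the waitforlisten step seen in the log:
# poll until a path (e.g. /var/tmp/spdk.sock) appears, up to
# max_retries attempts, sleeping briefly between checks.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}
```

In the harness the equivalent call site is `waitforlisten $nvmfpid`, issued right after `nvmf_tgt` is started in the background under `ip netns exec`.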
00:28:39.730 [2024-07-23 18:16:47.190473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.730 [2024-07-23 18:16:47.325548] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.730 null0 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 18d7c2c0989e4fccbd0791993c784fe4 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.730 [2024-07-23 18:16:47.365845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.730 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.988 nvme0n1 00:28:39.988 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.988 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:39.988 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.988 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.988 [ 00:28:39.988 { 00:28:39.988 "name": "nvme0n1", 00:28:39.988 "aliases": [ 00:28:39.988 "18d7c2c0-989e-4fcc-bd07-91993c784fe4" 00:28:39.988 ], 00:28:39.988 "product_name": "NVMe disk", 00:28:39.988 "block_size": 512, 00:28:39.988 "num_blocks": 2097152, 00:28:39.988 "uuid": "18d7c2c0-989e-4fcc-bd07-91993c784fe4", 00:28:39.988 "assigned_rate_limits": { 00:28:39.988 "rw_ios_per_sec": 0, 00:28:39.988 "rw_mbytes_per_sec": 0, 00:28:39.988 "r_mbytes_per_sec": 0, 00:28:39.988 "w_mbytes_per_sec": 0 00:28:39.988 }, 00:28:39.988 "claimed": false, 00:28:39.988 "zoned": false, 00:28:39.988 "supported_io_types": { 00:28:39.988 "read": true, 00:28:39.988 "write": true, 00:28:39.988 "unmap": false, 00:28:39.988 "flush": true, 00:28:39.988 "reset": true, 00:28:39.988 "nvme_admin": true, 00:28:39.988 "nvme_io": true, 00:28:39.988 "nvme_io_md": false, 00:28:39.988 "write_zeroes": true, 00:28:39.988 "zcopy": false, 00:28:39.988 "get_zone_info": false, 00:28:39.988 "zone_management": false, 00:28:39.988 "zone_append": false, 00:28:39.988 "compare": true, 00:28:39.988 "compare_and_write": true, 00:28:39.988 "abort": true, 00:28:39.988 "seek_hole": false, 00:28:39.988 "seek_data": false, 00:28:39.988 "copy": true, 00:28:39.988 "nvme_iov_md": false 
00:28:39.988 }, 00:28:39.988 "memory_domains": [ 00:28:39.988 { 00:28:39.988 "dma_device_id": "system", 00:28:39.988 "dma_device_type": 1 00:28:39.988 } 00:28:39.988 ], 00:28:39.988 "driver_specific": { 00:28:39.988 "nvme": [ 00:28:39.988 { 00:28:39.988 "trid": { 00:28:39.988 "trtype": "TCP", 00:28:39.988 "adrfam": "IPv4", 00:28:39.988 "traddr": "10.0.0.2", 00:28:39.988 "trsvcid": "4420", 00:28:39.988 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:39.988 }, 00:28:39.988 "ctrlr_data": { 00:28:39.988 "cntlid": 1, 00:28:39.988 "vendor_id": "0x8086", 00:28:39.988 "model_number": "SPDK bdev Controller", 00:28:39.988 "serial_number": "00000000000000000000", 00:28:39.988 "firmware_revision": "24.09", 00:28:39.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:39.988 "oacs": { 00:28:39.988 "security": 0, 00:28:39.988 "format": 0, 00:28:39.988 "firmware": 0, 00:28:39.988 "ns_manage": 0 00:28:39.988 }, 00:28:39.988 "multi_ctrlr": true, 00:28:39.988 "ana_reporting": false 00:28:39.988 }, 00:28:39.988 "vs": { 00:28:39.988 "nvme_version": "1.3" 00:28:39.988 }, 00:28:39.988 "ns_data": { 00:28:39.988 "id": 1, 00:28:39.988 "can_share": true 00:28:39.988 } 00:28:39.988 } 00:28:39.988 ], 00:28:39.988 "mp_policy": "active_passive" 00:28:39.988 } 00:28:39.988 } 00:28:39.988 ] 00:28:39.988 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.988 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:39.988 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.988 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:39.988 [2024-07-23 18:16:47.614312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:39.988 [2024-07-23 18:16:47.614407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd5680 
(9): Bad file descriptor 00:28:40.246 [2024-07-23 18:16:47.746456] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:40.246 [ 00:28:40.246 { 00:28:40.246 "name": "nvme0n1", 00:28:40.246 "aliases": [ 00:28:40.246 "18d7c2c0-989e-4fcc-bd07-91993c784fe4" 00:28:40.246 ], 00:28:40.246 "product_name": "NVMe disk", 00:28:40.246 "block_size": 512, 00:28:40.246 "num_blocks": 2097152, 00:28:40.246 "uuid": "18d7c2c0-989e-4fcc-bd07-91993c784fe4", 00:28:40.246 "assigned_rate_limits": { 00:28:40.246 "rw_ios_per_sec": 0, 00:28:40.246 "rw_mbytes_per_sec": 0, 00:28:40.246 "r_mbytes_per_sec": 0, 00:28:40.246 "w_mbytes_per_sec": 0 00:28:40.246 }, 00:28:40.246 "claimed": false, 00:28:40.246 "zoned": false, 00:28:40.246 "supported_io_types": { 00:28:40.246 "read": true, 00:28:40.246 "write": true, 00:28:40.246 "unmap": false, 00:28:40.246 "flush": true, 00:28:40.246 "reset": true, 00:28:40.246 "nvme_admin": true, 00:28:40.246 "nvme_io": true, 00:28:40.246 "nvme_io_md": false, 00:28:40.246 "write_zeroes": true, 00:28:40.246 "zcopy": false, 00:28:40.246 "get_zone_info": false, 00:28:40.246 "zone_management": false, 00:28:40.246 "zone_append": false, 00:28:40.246 "compare": true, 00:28:40.246 "compare_and_write": true, 00:28:40.246 "abort": true, 00:28:40.246 "seek_hole": false, 00:28:40.246 "seek_data": false, 00:28:40.246 "copy": true, 00:28:40.246 "nvme_iov_md": false 00:28:40.246 }, 00:28:40.246 "memory_domains": [ 00:28:40.246 { 00:28:40.246 "dma_device_id": "system", 00:28:40.246 "dma_device_type": 1 
00:28:40.246 } 00:28:40.246 ], 00:28:40.246 "driver_specific": { 00:28:40.246 "nvme": [ 00:28:40.246 { 00:28:40.246 "trid": { 00:28:40.246 "trtype": "TCP", 00:28:40.246 "adrfam": "IPv4", 00:28:40.246 "traddr": "10.0.0.2", 00:28:40.246 "trsvcid": "4420", 00:28:40.246 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:40.246 }, 00:28:40.246 "ctrlr_data": { 00:28:40.246 "cntlid": 2, 00:28:40.246 "vendor_id": "0x8086", 00:28:40.246 "model_number": "SPDK bdev Controller", 00:28:40.246 "serial_number": "00000000000000000000", 00:28:40.246 "firmware_revision": "24.09", 00:28:40.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:40.246 "oacs": { 00:28:40.246 "security": 0, 00:28:40.246 "format": 0, 00:28:40.246 "firmware": 0, 00:28:40.246 "ns_manage": 0 00:28:40.246 }, 00:28:40.246 "multi_ctrlr": true, 00:28:40.246 "ana_reporting": false 00:28:40.246 }, 00:28:40.246 "vs": { 00:28:40.246 "nvme_version": "1.3" 00:28:40.246 }, 00:28:40.246 "ns_data": { 00:28:40.246 "id": 1, 00:28:40.246 "can_share": true 00:28:40.246 } 00:28:40.246 } 00:28:40.246 ], 00:28:40.246 "mp_policy": "active_passive" 00:28:40.246 } 00:28:40.246 } 00:28:40.246 ] 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6OxowJkC3w 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6OxowJkC3w 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:40.246 [2024-07-23 18:16:47.790882] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:40.246 [2024-07-23 18:16:47.790983] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6OxowJkC3w 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:40.246 [2024-07-23 18:16:47.798899] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6OxowJkC3w 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:40.246 [2024-07-23 18:16:47.806921] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:40.246 [2024-07-23 18:16:47.806967] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:40.246 nvme0n1 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.246 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:40.246 [ 00:28:40.246 { 00:28:40.246 "name": "nvme0n1", 00:28:40.246 "aliases": [ 00:28:40.246 "18d7c2c0-989e-4fcc-bd07-91993c784fe4" 00:28:40.246 ], 00:28:40.246 "product_name": "NVMe disk", 00:28:40.246 "block_size": 512, 00:28:40.246 "num_blocks": 2097152, 00:28:40.246 "uuid": "18d7c2c0-989e-4fcc-bd07-91993c784fe4", 00:28:40.246 "assigned_rate_limits": { 00:28:40.246 "rw_ios_per_sec": 0, 00:28:40.246 "rw_mbytes_per_sec": 0, 00:28:40.246 "r_mbytes_per_sec": 0, 00:28:40.246 "w_mbytes_per_sec": 0 00:28:40.246 }, 00:28:40.246 "claimed": false, 00:28:40.246 "zoned": false, 00:28:40.246 "supported_io_types": { 
00:28:40.246 "read": true, 00:28:40.246 "write": true, 00:28:40.247 "unmap": false, 00:28:40.247 "flush": true, 00:28:40.247 "reset": true, 00:28:40.247 "nvme_admin": true, 00:28:40.247 "nvme_io": true, 00:28:40.247 "nvme_io_md": false, 00:28:40.247 "write_zeroes": true, 00:28:40.247 "zcopy": false, 00:28:40.247 "get_zone_info": false, 00:28:40.247 "zone_management": false, 00:28:40.247 "zone_append": false, 00:28:40.247 "compare": true, 00:28:40.247 "compare_and_write": true, 00:28:40.247 "abort": true, 00:28:40.247 "seek_hole": false, 00:28:40.247 "seek_data": false, 00:28:40.247 "copy": true, 00:28:40.247 "nvme_iov_md": false 00:28:40.247 }, 00:28:40.247 "memory_domains": [ 00:28:40.247 { 00:28:40.247 "dma_device_id": "system", 00:28:40.247 "dma_device_type": 1 00:28:40.247 } 00:28:40.247 ], 00:28:40.247 "driver_specific": { 00:28:40.247 "nvme": [ 00:28:40.247 { 00:28:40.247 "trid": { 00:28:40.247 "trtype": "TCP", 00:28:40.247 "adrfam": "IPv4", 00:28:40.247 "traddr": "10.0.0.2", 00:28:40.247 "trsvcid": "4421", 00:28:40.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:40.247 }, 00:28:40.247 "ctrlr_data": { 00:28:40.247 "cntlid": 3, 00:28:40.247 "vendor_id": "0x8086", 00:28:40.247 "model_number": "SPDK bdev Controller", 00:28:40.247 "serial_number": "00000000000000000000", 00:28:40.247 "firmware_revision": "24.09", 00:28:40.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:40.247 "oacs": { 00:28:40.247 "security": 0, 00:28:40.247 "format": 0, 00:28:40.247 "firmware": 0, 00:28:40.247 "ns_manage": 0 00:28:40.247 }, 00:28:40.247 "multi_ctrlr": true, 00:28:40.247 "ana_reporting": false 00:28:40.247 }, 00:28:40.247 "vs": { 00:28:40.247 "nvme_version": "1.3" 00:28:40.247 }, 00:28:40.247 "ns_data": { 00:28:40.247 "id": 1, 00:28:40.247 "can_share": true 00:28:40.247 } 00:28:40.247 } 00:28:40.247 ], 00:28:40.247 "mp_policy": "active_passive" 00:28:40.247 } 00:28:40.247 } 00:28:40.247 ] 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.6OxowJkC3w 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:40.247 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:40.505 rmmod nvme_tcp 00:28:40.505 rmmod nvme_fabrics 00:28:40.505 rmmod nvme_keyring 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2440490 ']' 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
2440490 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2440490 ']' 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2440490 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2440490 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2440490' 00:28:40.505 killing process with pid 2440490 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2440490 00:28:40.505 [2024-07-23 18:16:47.982523] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:40.505 [2024-07-23 18:16:47.982584] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:40.505 18:16:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2440490 00:28:40.762 18:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.762 18:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.762 18:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.762 18:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.762 18:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.762 18:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.762 18:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.762 18:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.662 00:28:42.662 real 0m5.525s 00:28:42.662 user 0m2.019s 00:28:42.662 sys 0m1.869s 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.662 ************************************ 00:28:42.662 END TEST nvmf_async_init 00:28:42.662 ************************************ 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.662 ************************************ 00:28:42.662 START TEST dma 00:28:42.662 ************************************ 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:42.662 * Looking for test storage... 
00:28:42.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.662 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:42.920 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.921 18:16:50 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:42.921 00:28:42.921 real 0m0.061s 00:28:42.921 user 0m0.024s 00:28:42.921 sys 0m0.042s 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:42.921 ************************************ 00:28:42.921 END TEST dma 00:28:42.921 ************************************ 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.921 ************************************ 00:28:42.921 START TEST nvmf_identify 00:28:42.921 ************************************ 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:42.921 * Looking for test storage... 
00:28:42.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:42.921 18:16:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.822 18:16:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:44.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:44.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:44.822 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:44.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:44.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.823 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:45.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:45.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:28:45.081 00:28:45.081 --- 10.0.0.2 ping statistics --- 00:28:45.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.081 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:28:45.081 00:28:45.081 --- 10.0.0.1 ping statistics --- 00:28:45.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.081 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2442616 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2442616 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2442616 ']' 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:45.081 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.081 [2024-07-23 18:16:52.636148] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:28:45.081 [2024-07-23 18:16:52.636218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.081 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.081 [2024-07-23 18:16:52.699378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.339 [2024-07-23 18:16:52.783672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.339 [2024-07-23 18:16:52.783727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.339 [2024-07-23 18:16:52.783751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.339 [2024-07-23 18:16:52.783762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.339 [2024-07-23 18:16:52.783771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:45.339 [2024-07-23 18:16:52.783829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.339 [2024-07-23 18:16:52.783886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.339 [2024-07-23 18:16:52.784000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.339 [2024-07-23 18:16:52.784003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.339 [2024-07-23 18:16:52.904442] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.339 Malloc0 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.339 18:16:52 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.339 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.340 [2024-07-23 18:16:52.975376] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.340 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.340 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:45.340 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.340 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.340 18:16:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.340 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:45.340 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.340 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:45.340 [ 00:28:45.340 { 00:28:45.340 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:45.340 "subtype": "Discovery", 00:28:45.340 "listen_addresses": [ 00:28:45.340 { 00:28:45.340 "trtype": "TCP", 00:28:45.340 "adrfam": "IPv4", 00:28:45.340 "traddr": "10.0.0.2", 00:28:45.340 "trsvcid": "4420" 00:28:45.340 } 00:28:45.340 ], 00:28:45.340 "allow_any_host": true, 00:28:45.340 "hosts": [] 00:28:45.340 }, 00:28:45.340 { 00:28:45.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.340 "subtype": "NVMe", 00:28:45.340 "listen_addresses": [ 00:28:45.340 { 00:28:45.340 "trtype": "TCP", 00:28:45.340 "adrfam": "IPv4", 00:28:45.340 "traddr": "10.0.0.2", 00:28:45.340 "trsvcid": "4420" 00:28:45.340 } 00:28:45.340 ], 00:28:45.340 "allow_any_host": true, 00:28:45.340 "hosts": [], 00:28:45.340 "serial_number": "SPDK00000000000001", 00:28:45.340 "model_number": "SPDK bdev Controller", 00:28:45.340 "max_namespaces": 32, 00:28:45.340 "min_cntlid": 1, 00:28:45.340 "max_cntlid": 65519, 00:28:45.340 "namespaces": [ 00:28:45.340 { 00:28:45.340 "nsid": 1, 00:28:45.340 "bdev_name": "Malloc0", 00:28:45.340 "name": "Malloc0", 00:28:45.340 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:45.340 "eui64": "ABCDEF0123456789", 00:28:45.340 "uuid": "19ec8a3a-f7fc-4efe-93ad-1f078286f8a1" 00:28:45.340 } 00:28:45.340 ] 00:28:45.340 } 00:28:45.340 ] 00:28:45.340 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.599 18:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:45.599 [2024-07-23 18:16:53.012971] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:45.599 [2024-07-23 18:16:53.013011] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442639 ] 00:28:45.599 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.599 [2024-07-23 18:16:53.046629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:45.599 [2024-07-23 18:16:53.046687] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:45.599 [2024-07-23 18:16:53.046696] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:45.599 [2024-07-23 18:16:53.046711] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:45.599 [2024-07-23 18:16:53.046724] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:45.599 [2024-07-23 18:16:53.047007] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:45.599 [2024-07-23 18:16:53.047055] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4b3fe0 0 00:28:45.599 [2024-07-23 18:16:53.061326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:45.599 [2024-07-23 18:16:53.061354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:45.599 [2024-07-23 18:16:53.061365] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:28:45.599 [2024-07-23 18:16:53.061372] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:28:45.599 [2024-07-23 18:16:53.061427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.599 [2024-07-23 18:16:53.061441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.599 [2024-07-23 18:16:53.061449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.599 [2024-07-23 18:16:53.061469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:28:45.599 [2024-07-23 18:16:53.061496] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.599 [2024-07-23 18:16:53.068332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.599 [2024-07-23 18:16:53.068351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.599 [2024-07-23 18:16:53.068359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.599 [2024-07-23 18:16:53.068367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.599 [2024-07-23 18:16:53.068383] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:28:45.599 [2024-07-23 18:16:53.068397] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout)
00:28:45.600 [2024-07-23 18:16:53.068407] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout)
00:28:45.600 [2024-07-23 18:16:53.068429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.068441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.068449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.600 [2024-07-23 18:16:53.068460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.600 [2024-07-23 18:16:53.068484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.600 [2024-07-23 18:16:53.068635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.600 [2024-07-23 18:16:53.068652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.600 [2024-07-23 18:16:53.068658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.068666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.600 [2024-07-23 18:16:53.068683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout)
00:28:45.600 [2024-07-23 18:16:53.068698] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout)
00:28:45.600 [2024-07-23 18:16:53.068711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.068722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.068729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.600 [2024-07-23 18:16:53.068741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.600 [2024-07-23 18:16:53.068763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.600 [2024-07-23 18:16:53.068854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.600 [2024-07-23 18:16:53.068870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.600 [2024-07-23 18:16:53.068877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.068884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.600 [2024-07-23 18:16:53.068893] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout)
00:28:45.600 [2024-07-23 18:16:53.068909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms)
00:28:45.600 [2024-07-23 18:16:53.068924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.068931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.068938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.600 [2024-07-23 18:16:53.068949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.600 [2024-07-23 18:16:53.068971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.600 [2024-07-23 18:16:53.069078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.600 [2024-07-23 18:16:53.069094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.600 [2024-07-23 18:16:53.069101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.600 [2024-07-23 18:16:53.069120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:28:45.600 [2024-07-23 18:16:53.069138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069147] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.600 [2024-07-23 18:16:53.069168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.600 [2024-07-23 18:16:53.069195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.600 [2024-07-23 18:16:53.069302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.600 [2024-07-23 18:16:53.069325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.600 [2024-07-23 18:16:53.069336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.600 [2024-07-23 18:16:53.069354] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0
00:28:45.600 [2024-07-23 18:16:53.069362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms)
00:28:45.600 [2024-07-23 18:16:53.069376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:28:45.600 [2024-07-23 18:16:53.069489] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1
00:28:45.600 [2024-07-23 18:16:53.069499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:28:45.600 [2024-07-23 18:16:53.069514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.600 [2024-07-23 18:16:53.069540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.600 [2024-07-23 18:16:53.069577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.600 [2024-07-23 18:16:53.069742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.600 [2024-07-23 18:16:53.069758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.600 [2024-07-23 18:16:53.069765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.600 [2024-07-23 18:16:53.069780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:28:45.600 [2024-07-23 18:16:53.069798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.600 [2024-07-23 18:16:53.069826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.600 [2024-07-23 18:16:53.069848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.600 [2024-07-23 18:16:53.069933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.600 [2024-07-23 18:16:53.069948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.600 [2024-07-23 18:16:53.069955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.069962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.600 [2024-07-23 18:16:53.069973] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:28:45.600 [2024-07-23 18:16:53.069982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:28:45.600 [2024-07-23 18:16:53.069996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:28:45.600 [2024-07-23 18:16:53.070014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:28:45.600 [2024-07-23 18:16:53.070035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.070043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.600 [2024-07-23 18:16:53.070054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.600 [2024-07-23 18:16:53.070076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.600 [2024-07-23 18:16:53.070209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.600 [2024-07-23 18:16:53.070230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.600 [2024-07-23 18:16:53.070241] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.070248] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b3fe0): datao=0, datal=4096, cccid=0
00:28:45.600 [2024-07-23 18:16:53.070257] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51a880) on tqpair(0x4b3fe0): expected_datao=0, payload_size=4096
00:28:45.600 [2024-07-23 18:16:53.070265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.070283] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.070293] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.110449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.600 [2024-07-23 18:16:53.110470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.600 [2024-07-23 18:16:53.110479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.600 [2024-07-23 18:16:53.110489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.600 [2024-07-23 18:16:53.110502] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:28:45.601 [2024-07-23 18:16:53.110511] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:28:45.601 [2024-07-23 18:16:53.110519] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:28:45.601 [2024-07-23 18:16:53.110529] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:28:45.601 [2024-07-23 18:16:53.110537] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:28:45.601 [2024-07-23 18:16:53.110545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:28:45.601 [2024-07-23 18:16:53.110562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:28:45.601 [2024-07-23 18:16:53.110583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.110611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:28:45.601 [2024-07-23 18:16:53.110635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.601 [2024-07-23 18:16:53.110732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.601 [2024-07-23 18:16:53.110748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.601 [2024-07-23 18:16:53.110755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.601 [2024-07-23 18:16:53.110785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.110811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.601 [2024-07-23 18:16:53.110822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.110844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.601 [2024-07-23 18:16:53.110854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.110876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.601 [2024-07-23 18:16:53.110886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.110922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.601 [2024-07-23 18:16:53.110932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:28:45.601 [2024-07-23 18:16:53.110953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:28:45.601 [2024-07-23 18:16:53.110968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.110989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.111000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.601 [2024-07-23 18:16:53.111022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51a880, cid 0, qid 0
00:28:45.601 [2024-07-23 18:16:53.111033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51aa00, cid 1, qid 0
00:28:45.601 [2024-07-23 18:16:53.111056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ab80, cid 2, qid 0
00:28:45.601 [2024-07-23 18:16:53.111064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0
00:28:45.601 [2024-07-23 18:16:53.111072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ae80, cid 4, qid 0
00:28:45.601 [2024-07-23 18:16:53.111230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.601 [2024-07-23 18:16:53.111246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.601 [2024-07-23 18:16:53.111253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ae80) on tqpair=0x4b3fe0
00:28:45.601 [2024-07-23 18:16:53.111269] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:28:45.601 [2024-07-23 18:16:53.111278] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:28:45.601 [2024-07-23 18:16:53.111298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.111335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.601 [2024-07-23 18:16:53.111359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ae80, cid 4, qid 0
00:28:45.601 [2024-07-23 18:16:53.111484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.601 [2024-07-23 18:16:53.111499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.601 [2024-07-23 18:16:53.111506] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111513] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b3fe0): datao=0, datal=4096, cccid=4
00:28:45.601 [2024-07-23 18:16:53.111521] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51ae80) on tqpair(0x4b3fe0): expected_datao=0, payload_size=4096
00:28:45.601 [2024-07-23 18:16:53.111534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111560] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111572] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.601 [2024-07-23 18:16:53.111624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.601 [2024-07-23 18:16:53.111631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ae80) on tqpair=0x4b3fe0
00:28:45.601 [2024-07-23 18:16:53.111658] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:28:45.601 [2024-07-23 18:16:53.111701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.111723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.601 [2024-07-23 18:16:53.111735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b3fe0)
00:28:45.601 [2024-07-23 18:16:53.111758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.601 [2024-07-23 18:16:53.111788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ae80, cid 4, qid 0
00:28:45.601 [2024-07-23 18:16:53.111800] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51b000, cid 5, qid 0
00:28:45.601 [2024-07-23 18:16:53.111932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.601 [2024-07-23 18:16:53.111948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.601 [2024-07-23 18:16:53.111955] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111962] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b3fe0): datao=0, datal=1024, cccid=4
00:28:45.601 [2024-07-23 18:16:53.111969] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51ae80) on tqpair(0x4b3fe0): expected_datao=0, payload_size=1024
00:28:45.601 [2024-07-23 18:16:53.111977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111986] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.111994] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.112002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.601 [2024-07-23 18:16:53.112011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.601 [2024-07-23 18:16:53.112022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.112030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51b000) on tqpair=0x4b3fe0
00:28:45.601 [2024-07-23 18:16:53.156332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.601 [2024-07-23 18:16:53.156352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.601 [2024-07-23 18:16:53.156359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.601 [2024-07-23 18:16:53.156366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ae80) on tqpair=0x4b3fe0
00:28:45.602 [2024-07-23 18:16:53.156385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.156395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b3fe0)
00:28:45.602 [2024-07-23 18:16:53.156406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.602 [2024-07-23 18:16:53.156439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ae80, cid 4, qid 0
00:28:45.602 [2024-07-23 18:16:53.156591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.602 [2024-07-23 18:16:53.156611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.602 [2024-07-23 18:16:53.156621] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.156628] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b3fe0): datao=0, datal=3072, cccid=4
00:28:45.602 [2024-07-23 18:16:53.156636] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51ae80) on tqpair(0x4b3fe0): expected_datao=0, payload_size=3072
00:28:45.602 [2024-07-23 18:16:53.156644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.156665] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.156675] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.197482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.602 [2024-07-23 18:16:53.197503] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.602 [2024-07-23 18:16:53.197512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.197521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ae80) on tqpair=0x4b3fe0
00:28:45.602 [2024-07-23 18:16:53.197537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.197547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b3fe0)
00:28:45.602 [2024-07-23 18:16:53.197559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.602 [2024-07-23 18:16:53.197591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ae80, cid 4, qid 0
00:28:45.602 [2024-07-23 18:16:53.197697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.602 [2024-07-23 18:16:53.197713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.602 [2024-07-23 18:16:53.197720] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.197727] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b3fe0): datao=0, datal=8, cccid=4
00:28:45.602 [2024-07-23 18:16:53.197734] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51ae80) on tqpair(0x4b3fe0): expected_datao=0, payload_size=8
00:28:45.602 [2024-07-23 18:16:53.197742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.197752] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.197759] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.238461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.602 [2024-07-23 18:16:53.238481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.602 [2024-07-23 18:16:53.238489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.602 [2024-07-23 18:16:53.238501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ae80) on tqpair=0x4b3fe0
00:28:45.602 =====================================================
00:28:45.602 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:45.602 =====================================================
00:28:45.602 Controller Capabilities/Features
00:28:45.602 ================================
00:28:45.602 Vendor ID: 0000
00:28:45.602 Subsystem Vendor ID: 0000
00:28:45.602 Serial Number: ....................
00:28:45.602 Model Number: ........................................
00:28:45.602 Firmware Version: 24.09
00:28:45.602 Recommended Arb Burst: 0
00:28:45.602 IEEE OUI Identifier: 00 00 00
00:28:45.602 Multi-path I/O
00:28:45.602 May have multiple subsystem ports: No
00:28:45.602 May have multiple controllers: No
00:28:45.602 Associated with SR-IOV VF: No
00:28:45.602 Max Data Transfer Size: 131072
00:28:45.602 Max Number of Namespaces: 0
00:28:45.602 Max Number of I/O Queues: 1024
00:28:45.602 NVMe Specification Version (VS): 1.3
00:28:45.602 NVMe Specification Version (Identify): 1.3
00:28:45.602 Maximum Queue Entries: 128
00:28:45.602 Contiguous Queues Required: Yes
00:28:45.602 Arbitration Mechanisms Supported
00:28:45.602 Weighted Round Robin: Not Supported
00:28:45.602 Vendor Specific: Not Supported
00:28:45.602 Reset Timeout: 15000 ms
00:28:45.602 Doorbell Stride: 4 bytes
00:28:45.602 NVM Subsystem Reset: Not Supported
00:28:45.602 Command Sets Supported
00:28:45.602 NVM Command Set: Supported
00:28:45.602 Boot Partition: Not Supported
00:28:45.602 Memory Page Size Minimum: 4096 bytes
00:28:45.602 Memory Page Size Maximum: 4096 bytes
00:28:45.602 Persistent Memory Region: Not Supported
00:28:45.602 Optional Asynchronous Events Supported
00:28:45.602 Namespace Attribute Notices: Not Supported
00:28:45.602 Firmware Activation Notices: Not Supported
00:28:45.602 ANA Change Notices: Not Supported
00:28:45.602 PLE Aggregate Log Change Notices: Not Supported
00:28:45.602 LBA Status Info Alert Notices: Not Supported
00:28:45.602 EGE Aggregate Log Change Notices: Not Supported
00:28:45.602 Normal NVM Subsystem Shutdown event: Not Supported
00:28:45.602 Zone Descriptor Change Notices: Not Supported
00:28:45.602 Discovery Log Change Notices: Supported
00:28:45.602 Controller Attributes
00:28:45.602 128-bit Host Identifier: Not Supported
00:28:45.602 Non-Operational Permissive Mode: Not Supported
00:28:45.602 NVM Sets: Not Supported
00:28:45.602 Read Recovery Levels: Not Supported
00:28:45.602 Endurance Groups: Not Supported
00:28:45.602 Predictable Latency Mode: Not Supported
00:28:45.602 Traffic Based Keep ALive: Not Supported
00:28:45.602 Namespace Granularity: Not Supported
00:28:45.602 SQ Associations: Not Supported
00:28:45.602 UUID List: Not Supported
00:28:45.602 Multi-Domain Subsystem: Not Supported
00:28:45.602 Fixed Capacity Management: Not Supported
00:28:45.602 Variable Capacity Management: Not Supported
00:28:45.602 Delete Endurance Group: Not Supported
00:28:45.602 Delete NVM Set: Not Supported
00:28:45.602 Extended LBA Formats Supported: Not Supported
00:28:45.602 Flexible Data Placement Supported: Not Supported
00:28:45.602 
00:28:45.602 Controller Memory Buffer Support
00:28:45.602 ================================
00:28:45.602 Supported: No
00:28:45.602 
00:28:45.602 Persistent Memory Region Support
00:28:45.602 ================================
00:28:45.602 Supported: No
00:28:45.602 
00:28:45.602 Admin Command Set Attributes
00:28:45.602 ============================
00:28:45.602 Security Send/Receive: Not Supported
00:28:45.602 Format NVM: Not Supported
00:28:45.602 Firmware Activate/Download: Not Supported
00:28:45.602 Namespace Management: Not Supported
00:28:45.602 Device Self-Test: Not Supported
00:28:45.602 Directives: Not Supported
00:28:45.602 NVMe-MI: Not Supported
00:28:45.602 Virtualization Management: Not Supported
00:28:45.602 Doorbell Buffer Config: Not Supported
00:28:45.602 Get LBA Status Capability: Not Supported
00:28:45.602 Command & Feature Lockdown Capability: Not Supported
00:28:45.602 Abort Command Limit: 1
00:28:45.602 Async Event Request Limit: 4
00:28:45.602 Number of Firmware Slots: N/A
00:28:45.602 Firmware Slot 1 Read-Only: N/A
00:28:45.602 Firmware Activation Without Reset: N/A
00:28:45.602 Multiple Update Detection Support: N/A
00:28:45.602 Firmware Update Granularity: No Information Provided
00:28:45.602 Per-Namespace SMART Log: No
00:28:45.602 Asymmetric Namespace Access Log Page: Not Supported
00:28:45.602 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:45.602 Command Effects Log Page: Not Supported
00:28:45.602 Get Log Page Extended Data: Supported
00:28:45.602 Telemetry Log Pages: Not Supported
00:28:45.602 Persistent Event Log Pages: Not Supported
00:28:45.603 Supported Log Pages Log Page: May Support
00:28:45.603 Commands Supported & Effects Log Page: Not Supported
00:28:45.603 Feature Identifiers & Effects Log Page:May Support
00:28:45.603 NVMe-MI Commands & Effects Log Page: May Support
00:28:45.603 Data Area 4 for Telemetry Log: Not Supported
00:28:45.603 Error Log Page Entries Supported: 128
00:28:45.603 Keep Alive: Not Supported
00:28:45.603 
00:28:45.603 NVM Command Set Attributes
00:28:45.603 ==========================
00:28:45.603 Submission Queue Entry Size
00:28:45.603 Max: 1
00:28:45.603 Min: 1
00:28:45.603 Completion Queue Entry Size
00:28:45.603 Max: 1
00:28:45.603 Min: 1
00:28:45.603 Number of Namespaces: 0
00:28:45.603 Compare Command: Not Supported
00:28:45.603 Write Uncorrectable Command: Not Supported
00:28:45.603 Dataset Management Command: Not Supported
00:28:45.603 Write Zeroes Command: Not Supported
00:28:45.603 Set Features Save Field: Not Supported
00:28:45.603 Reservations: Not Supported
00:28:45.603 Timestamp: Not Supported
00:28:45.603 Copy: Not Supported
00:28:45.603 Volatile Write Cache: Not Present
00:28:45.603 Atomic Write Unit (Normal): 1
00:28:45.603 Atomic Write Unit (PFail): 1
00:28:45.603 Atomic Compare & Write Unit: 1
00:28:45.603 Fused Compare & Write: Supported
00:28:45.603 Scatter-Gather List
00:28:45.603 SGL Command Set: Supported
00:28:45.603 SGL Keyed: Supported
00:28:45.603 SGL Bit Bucket Descriptor: Not Supported
00:28:45.603 SGL Metadata Pointer: Not Supported
00:28:45.603 Oversized SGL: Not Supported
00:28:45.603 SGL Metadata Address: Not Supported
00:28:45.603 SGL Offset: Supported
00:28:45.603 Transport SGL Data Block: Not Supported
00:28:45.603 Replay Protected Memory Block: Not Supported
00:28:45.603 
00:28:45.603 Firmware Slot Information
00:28:45.603 =========================
00:28:45.603 Active slot: 0
00:28:45.603 
00:28:45.603 
00:28:45.603 Error Log
00:28:45.603 =========
00:28:45.603 
00:28:45.603 Active Namespaces
00:28:45.603 =================
00:28:45.603 Discovery Log Page
00:28:45.603 ==================
00:28:45.603 Generation Counter: 2
00:28:45.603 Number of Records: 2
00:28:45.603 Record Format: 0
00:28:45.603 
00:28:45.603 Discovery Log Entry 0
00:28:45.603 ----------------------
00:28:45.603 Transport Type: 3 (TCP)
00:28:45.603 Address Family: 1 (IPv4)
00:28:45.603 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:45.603 Entry Flags:
00:28:45.603 Duplicate Returned Information: 1
00:28:45.603 Explicit Persistent Connection Support for Discovery: 1
00:28:45.603 Transport Requirements:
00:28:45.603 Secure Channel: Not Required
00:28:45.603 Port ID: 0 (0x0000)
00:28:45.603 Controller ID: 65535 (0xffff)
00:28:45.603 Admin Max SQ Size: 128
00:28:45.603 Transport Service Identifier: 4420
00:28:45.603 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:45.603 Transport Address: 10.0.0.2
00:28:45.603 Discovery Log Entry 1
00:28:45.603 ----------------------
00:28:45.603 Transport Type: 3 (TCP)
00:28:45.603 Address Family: 1 (IPv4)
00:28:45.603 Subsystem Type: 2 (NVM Subsystem)
00:28:45.603 Entry Flags:
00:28:45.603 Duplicate Returned Information: 0
00:28:45.603 Explicit Persistent Connection Support for Discovery: 0
00:28:45.603 Transport Requirements:
00:28:45.603 Secure Channel: Not Required
00:28:45.603 Port ID: 0 (0x0000)
00:28:45.603 Controller ID: 65535 (0xffff)
00:28:45.603 Admin Max SQ Size: 128
00:28:45.603 Transport Service Identifier: 4420
00:28:45.603 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:45.603 Transport Address: 10.0.0.2
[2024-07-23 18:16:53.238618] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:28:45.603 [2024-07-23 18:16:53.238643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51a880) on tqpair=0x4b3fe0
00:28:45.603 [2024-07-23 18:16:53.238658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.603 [2024-07-23 18:16:53.238668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51aa00) on tqpair=0x4b3fe0
00:28:45.603 [2024-07-23 18:16:53.238676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.603 [2024-07-23 18:16:53.238684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ab80) on tqpair=0x4b3fe0
00:28:45.603 [2024-07-23 18:16:53.238692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.603 [2024-07-23 18:16:53.238700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0
00:28:45.603 [2024-07-23 18:16:53.238708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.603 [2024-07-23 18:16:53.238726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.603 [2024-07-23 18:16:53.238750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.603 [2024-07-23 18:16:53.238757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0)
00:28:45.603 [2024-07-23 18:16:53.238768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.603 [2024-07-23 18:16:53.238792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0
00:28:45.603 [2024-07-23 18:16:53.238982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.603 [2024-07-23 18:16:53.238999]
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.603 [2024-07-23 18:16:53.239006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.603 [2024-07-23 18:16:53.239028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0) 00:28:45.603 [2024-07-23 18:16:53.239054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.603 [2024-07-23 18:16:53.239084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0 00:28:45.603 [2024-07-23 18:16:53.239183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.603 [2024-07-23 18:16:53.239199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.603 [2024-07-23 18:16:53.239205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.603 [2024-07-23 18:16:53.239225] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:45.603 [2024-07-23 18:16:53.239234] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:45.603 [2024-07-23 18:16:53.239251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239270] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0) 00:28:45.603 [2024-07-23 18:16:53.239282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.603 [2024-07-23 18:16:53.239309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0 00:28:45.603 [2024-07-23 18:16:53.239401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.603 [2024-07-23 18:16:53.239416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.603 [2024-07-23 18:16:53.239423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.603 [2024-07-23 18:16:53.239452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.603 [2024-07-23 18:16:53.239469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0) 00:28:45.604 [2024-07-23 18:16:53.239480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.604 [2024-07-23 18:16:53.239505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0 00:28:45.604 [2024-07-23 18:16:53.239593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.604 [2024-07-23 18:16:53.239609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.604 [2024-07-23 18:16:53.239616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.239623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.604 [2024-07-23 
18:16:53.239641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.239652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.239659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0) 00:28:45.604 [2024-07-23 18:16:53.239670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.604 [2024-07-23 18:16:53.239692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0 00:28:45.604 [2024-07-23 18:16:53.239782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.604 [2024-07-23 18:16:53.239797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.604 [2024-07-23 18:16:53.239804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.239811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.604 [2024-07-23 18:16:53.239832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.239841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.239848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0) 00:28:45.604 [2024-07-23 18:16:53.239860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.604 [2024-07-23 18:16:53.239884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0 00:28:45.604 [2024-07-23 18:16:53.239971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.604 [2024-07-23 18:16:53.239986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.604 [2024-07-23 18:16:53.239993] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.240000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.604 [2024-07-23 18:16:53.240019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.240030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.240037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0) 00:28:45.604 [2024-07-23 18:16:53.240047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.604 [2024-07-23 18:16:53.240070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0 00:28:45.604 [2024-07-23 18:16:53.240153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.604 [2024-07-23 18:16:53.240168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.604 [2024-07-23 18:16:53.240175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.240182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.604 [2024-07-23 18:16:53.240202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.240212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.240219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0) 00:28:45.604 [2024-07-23 18:16:53.240230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.604 [2024-07-23 18:16:53.240254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0 00:28:45.604 [2024-07-23 
18:16:53.244346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.604 [2024-07-23 18:16:53.244363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.604 [2024-07-23 18:16:53.244370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.244377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.604 [2024-07-23 18:16:53.244395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.244406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.244413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b3fe0) 00:28:45.604 [2024-07-23 18:16:53.244424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.604 [2024-07-23 18:16:53.244446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51ad00, cid 3, qid 0 00:28:45.604 [2024-07-23 18:16:53.244573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.604 [2024-07-23 18:16:53.244590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.604 [2024-07-23 18:16:53.244596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.604 [2024-07-23 18:16:53.244604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51ad00) on tqpair=0x4b3fe0 00:28:45.604 [2024-07-23 18:16:53.244621] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:45.870 00:28:45.870 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
00:28:45.870 [2024-07-23 18:16:53.282513] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:45.870 [2024-07-23 18:16:53.282566] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442686 ] 00:28:45.870 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.870 [2024-07-23 18:16:53.320062] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:45.870 [2024-07-23 18:16:53.320108] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:45.870 [2024-07-23 18:16:53.320117] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:45.870 [2024-07-23 18:16:53.320130] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:45.870 [2024-07-23 18:16:53.320141] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:45.870 [2024-07-23 18:16:53.320383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:45.870 [2024-07-23 18:16:53.320421] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f14fe0 0 00:28:45.870 [2024-07-23 18:16:53.335328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:45.870 [2024-07-23 18:16:53.335350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:45.870 [2024-07-23 18:16:53.335359] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:45.870 [2024-07-23 18:16:53.335365] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:45.870 [2024-07-23 18:16:53.335403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:28:45.870 [2024-07-23 18:16:53.335414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.870 [2024-07-23 18:16:53.335420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f14fe0) 00:28:45.870 [2024-07-23 18:16:53.335434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:45.870 [2024-07-23 18:16:53.335459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0 00:28:45.870 [2024-07-23 18:16:53.343332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.870 [2024-07-23 18:16:53.343356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.870 [2024-07-23 18:16:53.343363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.870 [2024-07-23 18:16:53.343371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0 00:28:45.870 [2024-07-23 18:16:53.343389] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:45.870 [2024-07-23 18:16:53.343400] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:45.870 [2024-07-23 18:16:53.343410] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:45.870 [2024-07-23 18:16:53.343432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.870 [2024-07-23 18:16:53.343441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.870 [2024-07-23 18:16:53.343448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f14fe0) 00:28:45.870 [2024-07-23 18:16:53.343460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.870 
[2024-07-23 18:16:53.343483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0 00:28:45.870 [2024-07-23 18:16:53.343604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.871 [2024-07-23 18:16:53.343617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.871 [2024-07-23 18:16:53.343623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.343630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0 00:28:45.871 [2024-07-23 18:16:53.343642] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:45.871 [2024-07-23 18:16:53.343655] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:45.871 [2024-07-23 18:16:53.343668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.343676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.343682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f14fe0) 00:28:45.871 [2024-07-23 18:16:53.343693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.871 [2024-07-23 18:16:53.343714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0 00:28:45.871 [2024-07-23 18:16:53.343791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.871 [2024-07-23 18:16:53.343807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.871 [2024-07-23 18:16:53.343815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.343822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1f7b880) on tqpair=0x1f14fe0 00:28:45.871 [2024-07-23 18:16:53.343830] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:45.871 [2024-07-23 18:16:53.343844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:45.871 [2024-07-23 18:16:53.343856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.343864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.343870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f14fe0) 00:28:45.871 [2024-07-23 18:16:53.343881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.871 [2024-07-23 18:16:53.343901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0 00:28:45.871 [2024-07-23 18:16:53.343985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.871 [2024-07-23 18:16:53.343997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.871 [2024-07-23 18:16:53.344004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0 00:28:45.871 [2024-07-23 18:16:53.344018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:45.871 [2024-07-23 18:16:53.344034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1f14fe0) 00:28:45.871 [2024-07-23 18:16:53.344060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.871 [2024-07-23 18:16:53.344081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0 00:28:45.871 [2024-07-23 18:16:53.344161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.871 [2024-07-23 18:16:53.344173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.871 [2024-07-23 18:16:53.344180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0 00:28:45.871 [2024-07-23 18:16:53.344193] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:45.871 [2024-07-23 18:16:53.344202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:45.871 [2024-07-23 18:16:53.344215] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:45.871 [2024-07-23 18:16:53.344325] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:45.871 [2024-07-23 18:16:53.344334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:45.871 [2024-07-23 18:16:53.344346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1f14fe0) 00:28:45.871 [2024-07-23 18:16:53.344371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.871 [2024-07-23 18:16:53.344397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0 00:28:45.871 [2024-07-23 18:16:53.344513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.871 [2024-07-23 18:16:53.344525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.871 [2024-07-23 18:16:53.344531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0 00:28:45.871 [2024-07-23 18:16:53.344546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:45.871 [2024-07-23 18:16:53.344562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f14fe0) 00:28:45.871 [2024-07-23 18:16:53.344588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.871 [2024-07-23 18:16:53.344609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0 00:28:45.871 [2024-07-23 18:16:53.344705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.871 [2024-07-23 18:16:53.344719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.871 [2024-07-23 18:16:53.344726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 
18:16:53.344733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0 00:28:45.871 [2024-07-23 18:16:53.344740] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:45.871 [2024-07-23 18:16:53.344748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:45.871 [2024-07-23 18:16:53.344761] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:45.871 [2024-07-23 18:16:53.344775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:45.871 [2024-07-23 18:16:53.344788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f14fe0) 00:28:45.871 [2024-07-23 18:16:53.344807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.871 [2024-07-23 18:16:53.344828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0 00:28:45.871 [2024-07-23 18:16:53.344969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:45.871 [2024-07-23 18:16:53.344984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:45.871 [2024-07-23 18:16:53.344991] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.344997] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f14fe0): datao=0, datal=4096, cccid=0 00:28:45.871 [2024-07-23 18:16:53.345005] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1f7b880) on tqpair(0x1f14fe0): expected_datao=0, payload_size=4096 00:28:45.871 [2024-07-23 18:16:53.345012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.345023] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.345030] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.385484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.871 [2024-07-23 18:16:53.385503] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.871 [2024-07-23 18:16:53.385511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.871 [2024-07-23 18:16:53.385518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0 00:28:45.871 [2024-07-23 18:16:53.385533] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:45.871 [2024-07-23 18:16:53.385542] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:45.871 [2024-07-23 18:16:53.385550] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:45.871 [2024-07-23 18:16:53.385556] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:45.871 [2024-07-23 18:16:53.385564] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:45.871 [2024-07-23 18:16:53.385572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:45.871 [2024-07-23 18:16:53.385586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:45.871 [2024-07-23 18:16:53.385603] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.871 [2024-07-23 18:16:53.385612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.871 [2024-07-23 18:16:53.385618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f14fe0)
00:28:45.871 [2024-07-23 18:16:53.385630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:28:45.871 [2024-07-23 18:16:53.385654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0
00:28:45.872 [2024-07-23 18:16:53.385743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.872 [2024-07-23 18:16:53.385758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.872 [2024-07-23 18:16:53.385764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0
00:28:45.872 [2024-07-23 18:16:53.385781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f14fe0)
00:28:45.872 [2024-07-23 18:16:53.385805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.872 [2024-07-23 18:16:53.385815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f14fe0)
00:28:45.872 [2024-07-23 18:16:53.385837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.872 [2024-07-23 18:16:53.385846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f14fe0)
00:28:45.872 [2024-07-23 18:16:53.385868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.872 [2024-07-23 18:16:53.385877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0)
00:28:45.872 [2024-07-23 18:16:53.385899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.872 [2024-07-23 18:16:53.385908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.385943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.385957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.385964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f14fe0)
00:28:45.872 [2024-07-23 18:16:53.385974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.872 [2024-07-23 18:16:53.385996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 0, qid 0
00:28:45.872 [2024-07-23 18:16:53.386023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7ba00, cid 1, qid 0
00:28:45.872 [2024-07-23 18:16:53.386031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bb80, cid 2, qid 0
00:28:45.872 [2024-07-23 18:16:53.386038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0
00:28:45.872 [2024-07-23 18:16:53.386045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be80, cid 4, qid 0
00:28:45.872 [2024-07-23 18:16:53.386247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.872 [2024-07-23 18:16:53.386262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.872 [2024-07-23 18:16:53.386269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.386275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be80) on tqpair=0x1f14fe0
00:28:45.872 [2024-07-23 18:16:53.386284] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:28:45.872 [2024-07-23 18:16:53.386292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.386311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.386333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.386345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.386352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.386359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f14fe0)
00:28:45.872 [2024-07-23 18:16:53.386369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:28:45.872 [2024-07-23 18:16:53.386407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be80, cid 4, qid 0
00:28:45.872 [2024-07-23 18:16:53.386558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.872 [2024-07-23 18:16:53.386571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.872 [2024-07-23 18:16:53.386577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.386584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be80) on tqpair=0x1f14fe0
00:28:45.872 [2024-07-23 18:16:53.386655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.386675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.386690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.386698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f14fe0)
00:28:45.872 [2024-07-23 18:16:53.386709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.872 [2024-07-23 18:16:53.386734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be80, cid 4, qid 0
00:28:45.872 [2024-07-23 18:16:53.386838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.872 [2024-07-23 18:16:53.386850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.872 [2024-07-23 18:16:53.386857] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.386863] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f14fe0): datao=0, datal=4096, cccid=4
00:28:45.872 [2024-07-23 18:16:53.386871] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7be80) on tqpair(0x1f14fe0): expected_datao=0, payload_size=4096
00:28:45.872 [2024-07-23 18:16:53.386878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.386895] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.386904] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.430345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.872 [2024-07-23 18:16:53.430363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.872 [2024-07-23 18:16:53.430370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.430377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be80) on tqpair=0x1f14fe0
00:28:45.872 [2024-07-23 18:16:53.430400] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:28:45.872 [2024-07-23 18:16:53.430418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.430435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.430449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.430456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f14fe0)
00:28:45.872 [2024-07-23 18:16:53.430467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.872 [2024-07-23 18:16:53.430490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be80, cid 4, qid 0
00:28:45.872 [2024-07-23 18:16:53.430642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.872 [2024-07-23 18:16:53.430658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.872 [2024-07-23 18:16:53.430665] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.430671] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f14fe0): datao=0, datal=4096, cccid=4
00:28:45.872 [2024-07-23 18:16:53.430678] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7be80) on tqpair(0x1f14fe0): expected_datao=0, payload_size=4096
00:28:45.872 [2024-07-23 18:16:53.430686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.430703] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.430712] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.471426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.872 [2024-07-23 18:16:53.471444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.872 [2024-07-23 18:16:53.471451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.872 [2024-07-23 18:16:53.471458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be80) on tqpair=0x1f14fe0
00:28:45.872 [2024-07-23 18:16:53.471482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:28:45.872 [2024-07-23 18:16:53.471502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:28:45.873 [2024-07-23 18:16:53.471516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.471528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.471540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.471563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be80, cid 4, qid 0
00:28:45.873 [2024-07-23 18:16:53.471659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.873 [2024-07-23 18:16:53.471674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.873 [2024-07-23 18:16:53.471681] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.471687] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f14fe0): datao=0, datal=4096, cccid=4
00:28:45.873 [2024-07-23 18:16:53.471694] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7be80) on tqpair(0x1f14fe0): expected_datao=0, payload_size=4096
00:28:45.873 [2024-07-23 18:16:53.471702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.471719] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.471728] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.873 [2024-07-23 18:16:53.515351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.873 [2024-07-23 18:16:53.515358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be80) on tqpair=0x1f14fe0
00:28:45.873 [2024-07-23 18:16:53.515380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:28:45.873 [2024-07-23 18:16:53.515396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:28:45.873 [2024-07-23 18:16:53.515413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:28:45.873 [2024-07-23 18:16:53.515427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:28:45.873 [2024-07-23 18:16:53.515437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:28:45.873 [2024-07-23 18:16:53.515446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:28:45.873 [2024-07-23 18:16:53.515454] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:28:45.873 [2024-07-23 18:16:53.515462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:28:45.873 [2024-07-23 18:16:53.515471] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:28:45.873 [2024-07-23 18:16:53.515492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.515512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.515523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.515546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:28:45.873 [2024-07-23 18:16:53.515573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be80, cid 4, qid 0
00:28:45.873 [2024-07-23 18:16:53.515588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c000, cid 5, qid 0
00:28:45.873 [2024-07-23 18:16:53.515687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.873 [2024-07-23 18:16:53.515699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.873 [2024-07-23 18:16:53.515706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be80) on tqpair=0x1f14fe0
00:28:45.873 [2024-07-23 18:16:53.515724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.873 [2024-07-23 18:16:53.515733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.873 [2024-07-23 18:16:53.515739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c000) on tqpair=0x1f14fe0
00:28:45.873 [2024-07-23 18:16:53.515761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515770] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.515781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.515801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c000, cid 5, qid 0
00:28:45.873 [2024-07-23 18:16:53.515895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.873 [2024-07-23 18:16:53.515910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.873 [2024-07-23 18:16:53.515917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c000) on tqpair=0x1f14fe0
00:28:45.873 [2024-07-23 18:16:53.515939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.515948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.515959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.515980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c000, cid 5, qid 0
00:28:45.873 [2024-07-23 18:16:53.516069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.873 [2024-07-23 18:16:53.516081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.873 [2024-07-23 18:16:53.516088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c000) on tqpair=0x1f14fe0
00:28:45.873 [2024-07-23 18:16:53.516109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.516128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.516148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c000, cid 5, qid 0
00:28:45.873 [2024-07-23 18:16:53.516246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.873 [2024-07-23 18:16:53.516258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.873 [2024-07-23 18:16:53.516265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c000) on tqpair=0x1f14fe0
00:28:45.873 [2024-07-23 18:16:53.516295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.516326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.516344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.516363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.516374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.516391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.516403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f14fe0)
00:28:45.873 [2024-07-23 18:16:53.516419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.873 [2024-07-23 18:16:53.516458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c000, cid 5, qid 0
00:28:45.873 [2024-07-23 18:16:53.516470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be80, cid 4, qid 0
00:28:45.873 [2024-07-23 18:16:53.516477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c180, cid 6, qid 0
00:28:45.873 [2024-07-23 18:16:53.516485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c300, cid 7, qid 0
00:28:45.873 [2024-07-23 18:16:53.516712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.873 [2024-07-23 18:16:53.516725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.873 [2024-07-23 18:16:53.516732] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516738] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f14fe0): datao=0, datal=8192, cccid=5
00:28:45.873 [2024-07-23 18:16:53.516746] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7c000) on tqpair(0x1f14fe0): expected_datao=0, payload_size=8192
00:28:45.873 [2024-07-23 18:16:53.516753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516770] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516779] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.873 [2024-07-23 18:16:53.516817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.873 [2024-07-23 18:16:53.516823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516829] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f14fe0): datao=0, datal=512, cccid=4
00:28:45.873 [2024-07-23 18:16:53.516836] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7be80) on tqpair(0x1f14fe0): expected_datao=0, payload_size=512
00:28:45.873 [2024-07-23 18:16:53.516844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.873 [2024-07-23 18:16:53.516852] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516859] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.874 [2024-07-23 18:16:53.516876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.874 [2024-07-23 18:16:53.516882] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516888] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f14fe0): datao=0, datal=512, cccid=6
00:28:45.874 [2024-07-23 18:16:53.516895] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7c180) on tqpair(0x1f14fe0): expected_datao=0, payload_size=512
00:28:45.874 [2024-07-23 18:16:53.516905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516915] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516921] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:45.874 [2024-07-23 18:16:53.516938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:45.874 [2024-07-23 18:16:53.516944] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516950] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f14fe0): datao=0, datal=4096, cccid=7
00:28:45.874 [2024-07-23 18:16:53.516957] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7c300) on tqpair(0x1f14fe0): expected_datao=0, payload_size=4096
00:28:45.874 [2024-07-23 18:16:53.516964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516973] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516980] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.516991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.874 [2024-07-23 18:16:53.517000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.874 [2024-07-23 18:16:53.517007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.517027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c000) on tqpair=0x1f14fe0
00:28:45.874 [2024-07-23 18:16:53.517045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.874 [2024-07-23 18:16:53.517056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.874 [2024-07-23 18:16:53.517062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.517068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be80) on tqpair=0x1f14fe0
00:28:45.874 [2024-07-23 18:16:53.517084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.874 [2024-07-23 18:16:53.517094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.874 [2024-07-23 18:16:53.517100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.517106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c180) on tqpair=0x1f14fe0
00:28:45.874 [2024-07-23 18:16:53.517116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.874 [2024-07-23 18:16:53.517125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.874 [2024-07-23 18:16:53.517131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.874 [2024-07-23 18:16:53.517137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c300) on tqpair=0x1f14fe0
00:28:45.874 =====================================================
00:28:45.874 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:45.874 =====================================================
00:28:45.874 Controller Capabilities/Features
00:28:45.874 ================================
00:28:45.874 Vendor ID: 8086
00:28:45.874 Subsystem Vendor ID: 8086
00:28:45.874 Serial Number: SPDK00000000000001
00:28:45.874 Model Number: SPDK bdev Controller
00:28:45.874 Firmware Version: 24.09
00:28:45.874 Recommended Arb Burst: 6
00:28:45.874 IEEE OUI Identifier: e4 d2 5c
00:28:45.874 Multi-path I/O
00:28:45.874 May have multiple subsystem ports: Yes
00:28:45.874 May have multiple controllers: Yes
00:28:45.874 Associated with SR-IOV VF: No
00:28:45.874 Max Data Transfer Size: 131072
00:28:45.874 Max Number of Namespaces: 32
00:28:45.874 Max Number of I/O Queues: 127
00:28:45.874 NVMe Specification Version (VS): 1.3
00:28:45.874 NVMe Specification Version (Identify): 1.3
00:28:45.874 Maximum Queue Entries: 128
00:28:45.874 Contiguous Queues Required: Yes
00:28:45.874 Arbitration Mechanisms Supported
00:28:45.874 Weighted Round Robin: Not Supported
00:28:45.874 Vendor Specific: Not Supported
00:28:45.874 Reset Timeout: 15000 ms
00:28:45.874 Doorbell Stride: 4 bytes
00:28:45.874 NVM Subsystem Reset: Not Supported
00:28:45.874 Command Sets Supported
00:28:45.874 NVM Command Set: Supported
00:28:45.874 Boot Partition: Not Supported
00:28:45.874 Memory Page Size Minimum: 4096 bytes
00:28:45.874 Memory Page Size Maximum: 4096 bytes
00:28:45.874 Persistent Memory Region: Not Supported
00:28:45.874 Optional Asynchronous Events Supported
00:28:45.874 Namespace Attribute Notices: Supported
00:28:45.874 Firmware Activation Notices: Not Supported
00:28:45.874 ANA Change Notices: Not Supported
00:28:45.874 PLE Aggregate Log Change Notices: Not Supported
00:28:45.874 LBA Status Info Alert Notices: Not Supported
00:28:45.874 EGE Aggregate Log Change Notices: Not Supported
00:28:45.874 Normal NVM Subsystem Shutdown event: Not Supported
00:28:45.874 Zone Descriptor Change Notices: Not Supported
00:28:45.874 Discovery Log Change Notices: Not Supported
00:28:45.874 Controller Attributes
00:28:45.874 128-bit Host Identifier: Supported
00:28:45.874 Non-Operational Permissive Mode: Not Supported
00:28:45.874 NVM Sets: Not Supported
00:28:45.874 Read Recovery Levels: Not Supported
00:28:45.874 Endurance Groups: Not Supported
00:28:45.874 Predictable Latency Mode: Not Supported
00:28:45.874 Traffic Based Keep ALive: Not Supported
00:28:45.874 Namespace Granularity: Not Supported
00:28:45.874 SQ Associations: Not Supported
00:28:45.874 UUID List: Not Supported
00:28:45.874 Multi-Domain Subsystem: Not Supported
00:28:45.874 Fixed Capacity Management: Not Supported
00:28:45.874 Variable Capacity Management: Not Supported
00:28:45.874 Delete Endurance Group: Not Supported
00:28:45.874 Delete NVM Set: Not Supported
00:28:45.874 Extended LBA Formats Supported: Not Supported
00:28:45.874 Flexible Data Placement Supported: Not Supported
00:28:45.874 
00:28:45.874 Controller Memory Buffer Support
00:28:45.874 ================================
00:28:45.874 Supported: No
00:28:45.874 
00:28:45.874 Persistent Memory Region Support
00:28:45.874 ================================
00:28:45.874 Supported: No
00:28:45.874 
00:28:45.874 Admin Command Set Attributes
00:28:45.874 ============================
00:28:45.874 Security Send/Receive: Not Supported
00:28:45.874 Format NVM: Not Supported
00:28:45.874 Firmware Activate/Download: Not Supported
00:28:45.874 Namespace Management: Not Supported
00:28:45.874 Device Self-Test: Not Supported
00:28:45.874 Directives: Not Supported
00:28:45.874 NVMe-MI: Not Supported
00:28:45.874 Virtualization Management: Not Supported
00:28:45.874 Doorbell Buffer Config: Not Supported
00:28:45.874 Get LBA Status Capability: Not Supported
00:28:45.874 Command & Feature Lockdown Capability: Not Supported
00:28:45.874 Abort Command Limit: 4
00:28:45.874 Async Event Request Limit: 4
00:28:45.874 Number of Firmware Slots: N/A
00:28:45.874 Firmware Slot 1 Read-Only: N/A
00:28:45.874 Firmware Activation Without Reset: N/A
00:28:45.874 Multiple Update Detection Support: N/A
00:28:45.874 Firmware Update Granularity: No Information Provided
00:28:45.874 Per-Namespace SMART Log: No
00:28:45.874 Asymmetric Namespace Access Log Page: Not Supported
00:28:45.874 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:45.874 Command Effects Log Page: Supported
00:28:45.875 Get Log Page Extended Data: Supported
00:28:45.875 Telemetry Log Pages: Not Supported
00:28:45.875 Persistent Event Log Pages: Not Supported
00:28:45.875 Supported Log Pages Log Page: May Support
00:28:45.875 Commands Supported & Effects Log Page: Not Supported
00:28:45.875 Feature Identifiers & Effects Log Page:May Support
00:28:45.875 NVMe-MI Commands & Effects Log Page: May Support
00:28:45.875 Data Area 4 for Telemetry Log: Not Supported
00:28:45.875 Error Log Page Entries Supported: 128
00:28:45.875 Keep Alive: Supported
00:28:45.875 Keep Alive Granularity: 10000 ms
00:28:45.875 
00:28:45.875 NVM Command Set Attributes
00:28:45.875 ==========================
00:28:45.875 Submission Queue Entry Size
00:28:45.875 Max: 64
00:28:45.875 Min: 64
00:28:45.875 Completion Queue Entry Size
00:28:45.875 Max: 16
00:28:45.875 Min: 16
00:28:45.875 Number of Namespaces: 32
00:28:45.875 Compare Command: Supported
00:28:45.875 Write Uncorrectable Command: Not Supported
00:28:45.875 Dataset Management Command: Supported
00:28:45.875 Write Zeroes Command: Supported
00:28:45.875 Set Features Save Field: Not Supported
00:28:45.875 Reservations: Supported
00:28:45.875 Timestamp: Not Supported
00:28:45.875 Copy: Supported
00:28:45.875 Volatile Write Cache: Present
00:28:45.875 Atomic Write Unit (Normal): 1
00:28:45.875 Atomic Write Unit (PFail): 1
00:28:45.875 Atomic Compare & Write Unit: 1
00:28:45.875 Fused Compare & Write: Supported
00:28:45.875 Scatter-Gather List
00:28:45.875 SGL Command Set: Supported
00:28:45.875 SGL Keyed: Supported
00:28:45.875 SGL Bit Bucket Descriptor: Not Supported
00:28:45.875 SGL Metadata Pointer: Not Supported
00:28:45.875 Oversized SGL: Not Supported
00:28:45.875 SGL Metadata Address: Not Supported
00:28:45.875 SGL Offset: Supported
00:28:45.875 Transport SGL Data Block: Not Supported
00:28:45.875 Replay Protected Memory Block: Not Supported
00:28:45.875 
00:28:45.875 Firmware Slot Information
00:28:45.875 =========================
00:28:45.875 Active slot: 1
00:28:45.875 Slot 1 Firmware Revision: 24.09
00:28:45.875 
00:28:45.875 
00:28:45.875 Commands Supported and Effects
00:28:45.875 ==============================
00:28:45.875 Admin Commands
00:28:45.875 --------------
00:28:45.875 Get Log Page (02h): Supported
00:28:45.875 Identify (06h): Supported
00:28:45.875 Abort (08h): Supported
00:28:45.875 Set Features (09h): Supported
00:28:45.875 Get Features (0Ah): Supported
00:28:45.875 Asynchronous Event Request (0Ch): Supported
00:28:45.875 Keep Alive (18h): Supported
00:28:45.875 I/O Commands
00:28:45.875 ------------
00:28:45.875 Flush (00h): Supported LBA-Change
00:28:45.875 Write (01h): Supported LBA-Change
00:28:45.875 Read (02h): Supported
00:28:45.875 Compare (05h): Supported
00:28:45.875 Write Zeroes (08h): Supported LBA-Change
00:28:45.875 Dataset Management (09h): Supported LBA-Change
00:28:45.875 Copy (19h): Supported LBA-Change
00:28:45.875 
00:28:45.875 Error Log
00:28:45.875 =========
00:28:45.875 
00:28:45.875 Arbitration
00:28:45.875 ===========
00:28:45.875 Arbitration Burst: 1
00:28:45.875 
00:28:45.875 Power Management
00:28:45.875 ================
00:28:45.875 Number of Power States: 1
00:28:45.875 Current Power State: Power State #0
00:28:45.875 Power State #0:
00:28:45.875 Max Power: 0.00 W
00:28:45.875 Non-Operational State: Operational
00:28:45.875 Entry Latency: Not Reported
00:28:45.875 Exit Latency: Not Reported
00:28:45.875 Relative Read Throughput: 0
00:28:45.875 Relative Read Latency: 0
00:28:45.875 Relative Write Throughput: 0
00:28:45.875 Relative Write Latency: 0
00:28:45.875 Idle Power: Not Reported
00:28:45.875 Active Power: Not Reported
00:28:45.875 Non-Operational Permissive Mode: Not Supported
00:28:45.875 
00:28:45.875 Health Information
00:28:45.875 ==================
00:28:45.875 Critical Warnings:
00:28:45.875 Available Spare Space: OK
00:28:45.875 Temperature: OK
00:28:45.875 Device Reliability: OK
00:28:45.875 Read Only: No
00:28:45.875 Volatile Memory Backup: OK
00:28:45.875 Current Temperature: 0 Kelvin (-273 Celsius)
00:28:45.875 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:28:45.875 Available Spare: 0%
00:28:45.875 Available Spare Threshold: 0%
00:28:45.875 Life Percentage Used:[2024-07-23 18:16:53.517247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.875 [2024-07-23 18:16:53.517258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f14fe0)
00:28:45.875 [2024-07-23 18:16:53.517268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.875 [2024-07-23 18:16:53.517290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c300, cid 7, qid 0
00:28:45.875 [2024-07-23 18:16:53.517431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.875 [2024-07-23 18:16:53.517445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.875 [2024-07-23 18:16:53.517452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.875 [2024-07-23 18:16:53.517459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c300) on tqpair=0x1f14fe0
00:28:45.875 [2024-07-23 18:16:53.517507] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:28:45.875 [2024-07-23 18:16:53.517526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f14fe0
00:28:45.875 [2024-07-23 18:16:53.517536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.875 [2024-07-23 18:16:53.517548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7ba00) on tqpair=0x1f14fe0
00:28:45.875 [2024-07-23 18:16:53.517556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.875 [2024-07-23 18:16:53.517564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bb80) on tqpair=0x1f14fe0
00:28:45.875 [2024-07-23 18:16:53.517572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.875 [2024-07-23 18:16:53.517580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0
00:28:45.876 [2024-07-23 18:16:53.517588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:45.876 [2024-07-23 18:16:53.517600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.876 [2024-07-23 18:16:53.517608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.876 [2024-07-23 18:16:53.517629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0)
00:28:45.876 [2024-07-23 18:16:53.517640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.876 [2024-07-23 18:16:53.517661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0
00:28:45.876 [2024-07-23 18:16:53.517816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:45.876 [2024-07-23 18:16:53.517831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:45.876 [2024-07-23 18:16:53.517838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:45.876 [2024-07-23 18:16:53.517844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0
00:28:45.876 [2024-07-23 18:16:53.517855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:45.876 [2024-07-23 18:16:53.517863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:45.876 [2024-07-23 18:16:53.517870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0)
00:28:45.876 [2024-07-23 18:16:53.517880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.876 [2024-07-23 18:16:53.517906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.876 [2024-07-23 18:16:53.518004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.876 [2024-07-23 18:16:53.518016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.876 [2024-07-23 18:16:53.518022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.876 [2024-07-23 18:16:53.518037] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:45.876 [2024-07-23 18:16:53.518044] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:45.876 [2024-07-23 18:16:53.518059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.876 [2024-07-23 18:16:53.518085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.876 [2024-07-23 18:16:53.518105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.876 [2024-07-23 18:16:53.518202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.876 [2024-07-23 18:16:53.518214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.876 [2024-07-23 18:16:53.518220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518227] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.876 [2024-07-23 18:16:53.518246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.876 [2024-07-23 18:16:53.518273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.876 [2024-07-23 18:16:53.518294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.876 [2024-07-23 18:16:53.518385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.876 [2024-07-23 18:16:53.518399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.876 [2024-07-23 18:16:53.518405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518412] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.876 [2024-07-23 18:16:53.518427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.876 [2024-07-23 18:16:53.518454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.876 [2024-07-23 18:16:53.518475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.876 [2024-07-23 18:16:53.518562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.876 [2024-07-23 
18:16:53.518583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.876 [2024-07-23 18:16:53.518594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.876 [2024-07-23 18:16:53.518619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.876 [2024-07-23 18:16:53.518647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.876 [2024-07-23 18:16:53.518669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.876 [2024-07-23 18:16:53.518748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.876 [2024-07-23 18:16:53.518760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.876 [2024-07-23 18:16:53.518766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.876 [2024-07-23 18:16:53.518790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.876 [2024-07-23 18:16:53.518837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.876 [2024-07-23 
18:16:53.518868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.876 [2024-07-23 18:16:53.518949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.876 [2024-07-23 18:16:53.518964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.876 [2024-07-23 18:16:53.518970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.518980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.876 [2024-07-23 18:16:53.519011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.519028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.519040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.876 [2024-07-23 18:16:53.519057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.876 [2024-07-23 18:16:53.519084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.876 [2024-07-23 18:16:53.519167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.876 [2024-07-23 18:16:53.519181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.876 [2024-07-23 18:16:53.519187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.519194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.876 [2024-07-23 18:16:53.519210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.519220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.519226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.876 [2024-07-23 18:16:53.519237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.876 [2024-07-23 18:16:53.519258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.876 [2024-07-23 18:16:53.519338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.876 [2024-07-23 18:16:53.519351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.876 [2024-07-23 18:16:53.519359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.519365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.876 [2024-07-23 18:16:53.519381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.519390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.876 [2024-07-23 18:16:53.519397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.876 [2024-07-23 18:16:53.519408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.519431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.519519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.519535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.519542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.519565] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.519592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.519614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.519720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.519734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.519741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.519763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.519805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.519827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.519921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.519935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.519942] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.519965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.519980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.519991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.520012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.520124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.520137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.520143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.520165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.520191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.520212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 
18:16:53.520298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.520312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.520327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.520350] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.520376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.520398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.520482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.520496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.520503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.520525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.520554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.520576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.520680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.520692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.520699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.520720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520736] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.520746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.520767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.520882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.520894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.520900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.520922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.520931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:45.877 [2024-07-23 18:16:53.520938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.520948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.520968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.521055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.521069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.521075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.521082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.521097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.521106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.521113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:45.877 [2024-07-23 18:16:53.521124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.877 [2024-07-23 18:16:53.521145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:45.877 [2024-07-23 18:16:53.521252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:45.877 [2024-07-23 18:16:53.521268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:45.877 [2024-07-23 18:16:53.521275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.521281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) 
on tqpair=0x1f14fe0 00:28:45.877 [2024-07-23 18:16:53.521297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:45.877 [2024-07-23 18:16:53.521307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.136 [2024-07-23 18:16:53.521314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f14fe0) 00:28:46.136 [2024-07-23 18:16:53.525352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-07-23 18:16:53.525380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bd00, cid 3, qid 0 00:28:46.136 [2024-07-23 18:16:53.525516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.136 [2024-07-23 18:16:53.525528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.136 [2024-07-23 18:16:53.525535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.136 [2024-07-23 18:16:53.525542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bd00) on tqpair=0x1f14fe0 00:28:46.136 [2024-07-23 18:16:53.525555] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:46.136 0% 00:28:46.136 Data Units Read: 0 00:28:46.136 Data Units Written: 0 00:28:46.136 Host Read Commands: 0 00:28:46.136 Host Write Commands: 0 00:28:46.136 Controller Busy Time: 0 minutes 00:28:46.136 Power Cycles: 0 00:28:46.136 Power On Hours: 0 hours 00:28:46.136 Unsafe Shutdowns: 0 00:28:46.136 Unrecoverable Media Errors: 0 00:28:46.136 Lifetime Error Log Entries: 0 00:28:46.136 Warning Temperature Time: 0 minutes 00:28:46.136 Critical Temperature Time: 0 minutes 00:28:46.136 00:28:46.136 Number of Queues 00:28:46.137 ================ 00:28:46.137 Number of I/O Submission Queues: 127 00:28:46.137 Number of I/O Completion Queues: 127 00:28:46.137 00:28:46.137 Active 
Namespaces 00:28:46.137 ================= 00:28:46.137 Namespace ID:1 00:28:46.137 Error Recovery Timeout: Unlimited 00:28:46.137 Command Set Identifier: NVM (00h) 00:28:46.137 Deallocate: Supported 00:28:46.137 Deallocated/Unwritten Error: Not Supported 00:28:46.137 Deallocated Read Value: Unknown 00:28:46.137 Deallocate in Write Zeroes: Not Supported 00:28:46.137 Deallocated Guard Field: 0xFFFF 00:28:46.137 Flush: Supported 00:28:46.137 Reservation: Supported 00:28:46.137 Namespace Sharing Capabilities: Multiple Controllers 00:28:46.137 Size (in LBAs): 131072 (0GiB) 00:28:46.137 Capacity (in LBAs): 131072 (0GiB) 00:28:46.137 Utilization (in LBAs): 131072 (0GiB) 00:28:46.137 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:46.137 EUI64: ABCDEF0123456789 00:28:46.137 UUID: 19ec8a3a-f7fc-4efe-93ad-1f078286f8a1 00:28:46.137 Thin Provisioning: Not Supported 00:28:46.137 Per-NS Atomic Units: Yes 00:28:46.137 Atomic Boundary Size (Normal): 0 00:28:46.137 Atomic Boundary Size (PFail): 0 00:28:46.137 Atomic Boundary Offset: 0 00:28:46.137 Maximum Single Source Range Length: 65535 00:28:46.137 Maximum Copy Length: 65535 00:28:46.137 Maximum Source Range Count: 1 00:28:46.137 NGUID/EUI64 Never Reused: No 00:28:46.137 Namespace Write Protected: No 00:28:46.137 Number of LBA Formats: 1 00:28:46.137 Current LBA Format: LBA Format #00 00:28:46.137 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:46.137 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.137 18:16:53 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:46.137 rmmod nvme_tcp 00:28:46.137 rmmod nvme_fabrics 00:28:46.137 rmmod nvme_keyring 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2442616 ']' 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2442616 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2442616 ']' 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2442616 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2442616 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2442616' 00:28:46.137 killing process with pid 2442616 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2442616 00:28:46.137 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2442616 00:28:46.397 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:46.397 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:46.397 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:46.397 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:46.397 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:46.397 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.397 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.397 18:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.331 18:16:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:48.331 00:28:48.331 real 0m5.556s 00:28:48.331 user 0m5.011s 00:28:48.331 sys 0m1.862s 00:28:48.331 18:16:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:48.331 18:16:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:48.331 ************************************ 00:28:48.331 END TEST nvmf_identify 00:28:48.331 ************************************ 00:28:48.331 18:16:55 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@1142 -- # return 0 00:28:48.331 18:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:48.331 18:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:48.331 18:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.331 18:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.591 ************************************ 00:28:48.591 START TEST nvmf_perf 00:28:48.591 ************************************ 00:28:48.591 18:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:48.591 * Looking for test storage... 00:28:48.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 
00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.591 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.592 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:48.592 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:48.592 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:48.592 18:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@295 -- # net_devs=() 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.497 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:50.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:50.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:50.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.498 18:16:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:50.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:50.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:28:50.498 00:28:50.498 --- 10.0.0.2 ping statistics --- 00:28:50.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.498 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:50.498 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:50.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:50.757 00:28:50.757 --- 10.0.0.1 ping statistics --- 00:28:50.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.757 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2444691 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2444691 00:28:50.757 
18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2444691 ']' 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.757 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:50.757 [2024-07-23 18:16:58.237934] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:28:50.757 [2024-07-23 18:16:58.238021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.757 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.757 [2024-07-23 18:16:58.308103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.757 [2024-07-23 18:16:58.397251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.757 [2024-07-23 18:16:58.397325] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.757 [2024-07-23 18:16:58.397351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.757 [2024-07-23 18:16:58.397361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:50.757 [2024-07-23 18:16:58.397371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.757 [2024-07-23 18:16:58.397428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.757 [2024-07-23 18:16:58.397486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.758 [2024-07-23 18:16:58.397551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.758 [2024-07-23 18:16:58.397554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.016 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.016 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:51.016 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:51.016 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:51.016 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:51.016 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.016 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:51.016 18:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:54.297 18:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:54.297 18:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:54.297 18:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:54.297 18:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:54.554 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:54.554 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:54.554 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:54.554 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:54.554 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:54.812 [2024-07-23 18:17:02.419787] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.812 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.069 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:55.069 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:55.327 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:55.327 18:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:55.585 18:17:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.843 [2024-07-23 18:17:03.415431] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.843 18:17:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:56.102 18:17:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:56.102 18:17:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:56.102 18:17:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:56.102 18:17:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:57.472 Initializing NVMe Controllers 00:28:57.472 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:57.472 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:57.472 Initialization complete. Launching workers. 00:28:57.472 ======================================================== 00:28:57.472 Latency(us) 00:28:57.472 Device Information : IOPS MiB/s Average min max 00:28:57.472 PCIE (0000:88:00.0) NSID 1 from core 0: 81864.40 319.78 390.26 39.26 4396.97 00:28:57.472 ======================================================== 00:28:57.472 Total : 81864.40 319.78 390.26 39.26 4396.97 00:28:57.472 00:28:57.472 18:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:57.472 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.842 Initializing NVMe Controllers 00:28:58.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:58.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:58.842 
Initialization complete. Launching workers. 00:28:58.842 ======================================================== 00:28:58.842 Latency(us) 00:28:58.842 Device Information : IOPS MiB/s Average min max 00:28:58.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11048.58 148.79 45749.63 00:28:58.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17186.52 6961.97 50877.27 00:28:58.842 ======================================================== 00:28:58.842 Total : 153.00 0.60 13495.73 148.79 50877.27 00:28:58.842 00:28:58.842 18:17:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.842 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.215 Initializing NVMe Controllers 00:29:00.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:00.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:00.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:00.215 Initialization complete. Launching workers. 
00:29:00.215 ======================================================== 00:29:00.215 Latency(us) 00:29:00.215 Device Information : IOPS MiB/s Average min max 00:29:00.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8553.40 33.41 3740.92 575.34 8189.59 00:29:00.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3899.59 15.23 8220.88 4565.49 17450.99 00:29:00.215 ======================================================== 00:29:00.215 Total : 12453.00 48.64 5143.80 575.34 17450.99 00:29:00.215 00:29:00.215 18:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:00.215 18:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:00.215 18:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:00.215 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.742 Initializing NVMe Controllers 00:29:02.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:02.742 Controller IO queue size 128, less than required. 00:29:02.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:02.742 Controller IO queue size 128, less than required. 00:29:02.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:02.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:02.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:02.742 Initialization complete. Launching workers. 
00:29:02.742 ======================================================== 00:29:02.742 Latency(us) 00:29:02.742 Device Information : IOPS MiB/s Average min max 00:29:02.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1717.85 429.46 75636.54 52656.56 130590.05 00:29:02.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.43 147.61 221292.84 77751.94 343927.23 00:29:02.742 ======================================================== 00:29:02.742 Total : 2308.28 577.07 112893.79 52656.56 343927.23 00:29:02.742 00:29:02.742 18:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:02.742 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.742 No valid NVMe controllers or AIO or URING devices found 00:29:02.742 Initializing NVMe Controllers 00:29:02.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:02.742 Controller IO queue size 128, less than required. 00:29:02.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:02.742 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:02.742 Controller IO queue size 128, less than required. 00:29:02.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:02.742 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:02.742 WARNING: Some requested NVMe devices were skipped 00:29:02.742 18:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:02.742 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.267 Initializing NVMe Controllers 00:29:05.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.267 Controller IO queue size 128, less than required. 00:29:05.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:05.267 Controller IO queue size 128, less than required. 00:29:05.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:05.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:05.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:05.267 Initialization complete. Launching workers. 
00:29:05.267 00:29:05.267 ==================== 00:29:05.267 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:05.267 TCP transport: 00:29:05.267 polls: 11886 00:29:05.267 idle_polls: 9056 00:29:05.268 sock_completions: 2830 00:29:05.268 nvme_completions: 5465 00:29:05.268 submitted_requests: 8262 00:29:05.268 queued_requests: 1 00:29:05.268 00:29:05.268 ==================== 00:29:05.268 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:05.268 TCP transport: 00:29:05.268 polls: 11989 00:29:05.268 idle_polls: 8404 00:29:05.268 sock_completions: 3585 00:29:05.268 nvme_completions: 5441 00:29:05.268 submitted_requests: 8088 00:29:05.268 queued_requests: 1 00:29:05.268 ======================================================== 00:29:05.268 Latency(us) 00:29:05.268 Device Information : IOPS MiB/s Average min max 00:29:05.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1366.00 341.50 96636.38 69712.86 152657.35 00:29:05.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1360.00 340.00 94906.09 41470.76 127314.11 00:29:05.268 ======================================================== 00:29:05.268 Total : 2725.99 681.50 95773.14 41470.76 152657.35 00:29:05.268 00:29:05.268 18:17:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:05.268 18:17:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.268 18:17:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:05.268 18:17:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:05.268 18:17:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:09.446 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=02a6142f-dd20-42c8-baf8-f73aa96498ea 00:29:09.446 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 02a6142f-dd20-42c8-baf8-f73aa96498ea 00:29:09.446 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=02a6142f-dd20-42c8-baf8-f73aa96498ea 00:29:09.446 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:09.446 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:09.446 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:09.447 { 00:29:09.447 "uuid": "02a6142f-dd20-42c8-baf8-f73aa96498ea", 00:29:09.447 "name": "lvs_0", 00:29:09.447 "base_bdev": "Nvme0n1", 00:29:09.447 "total_data_clusters": 238234, 00:29:09.447 "free_clusters": 238234, 00:29:09.447 "block_size": 512, 00:29:09.447 "cluster_size": 4194304 00:29:09.447 } 00:29:09.447 ]' 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="02a6142f-dd20-42c8-baf8-f73aa96498ea") .free_clusters' 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="02a6142f-dd20-42c8-baf8-f73aa96498ea") .cluster_size' 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:29:09.447 952936 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:09.447 18:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 02a6142f-dd20-42c8-baf8-f73aa96498ea lbd_0 20480 00:29:09.447 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7c23a3c2-2d4a-48e3-8309-4732944418d5 00:29:09.447 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7c23a3c2-2d4a-48e3-8309-4732944418d5 lvs_n_0 00:29:10.379 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=bae5f489-ffa5-45ab-8666-b8a859bc356a 00:29:10.379 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb bae5f489-ffa5-45ab-8666-b8a859bc356a 00:29:10.379 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=bae5f489-ffa5-45ab-8666-b8a859bc356a 00:29:10.379 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:10.379 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:10.379 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:10.379 18:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:10.637 { 00:29:10.637 "uuid": "02a6142f-dd20-42c8-baf8-f73aa96498ea", 00:29:10.637 "name": "lvs_0", 00:29:10.637 "base_bdev": "Nvme0n1", 00:29:10.637 "total_data_clusters": 238234, 00:29:10.637 "free_clusters": 233114, 00:29:10.637 "block_size": 512, 00:29:10.637 
"cluster_size": 4194304 00:29:10.637 }, 00:29:10.637 { 00:29:10.637 "uuid": "bae5f489-ffa5-45ab-8666-b8a859bc356a", 00:29:10.637 "name": "lvs_n_0", 00:29:10.637 "base_bdev": "7c23a3c2-2d4a-48e3-8309-4732944418d5", 00:29:10.637 "total_data_clusters": 5114, 00:29:10.637 "free_clusters": 5114, 00:29:10.637 "block_size": 512, 00:29:10.637 "cluster_size": 4194304 00:29:10.637 } 00:29:10.637 ]' 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="bae5f489-ffa5-45ab-8666-b8a859bc356a") .free_clusters' 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="bae5f489-ffa5-45ab-8666-b8a859bc356a") .cluster_size' 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:10.637 20456 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:10.637 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bae5f489-ffa5-45ab-8666-b8a859bc356a lbd_nest_0 20456 00:29:10.895 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f005863b-4421-4463-a748-02ce5ad225e8 00:29:10.895 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:11.152 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:11.153 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f005863b-4421-4463-a748-02ce5ad225e8 00:29:11.442 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.709 18:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:11.709 18:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:11.709 18:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:11.709 18:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:11.709 18:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.709 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.900 Initializing NVMe Controllers 00:29:23.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:23.900 Initialization complete. Launching workers. 
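The sweep that perf.sh drives from here iterates every queue depth in `qd_depth` over every I/O size in `io_size`, six runs in total. A standalone sketch of that nested loop, with `spdk_nvme_perf` stubbed out by `echo` so it runs without an SPDK build (the real script invokes the binary with the same arguments shown in the log):

```shell
#!/bin/sh
# Queue depths and I/O sizes taken from host/perf.sh@95-96 in this log.
# spdk_nvme_perf is replaced with echo so the sketch is runnable anywhere.
for qd in 1 32 128; do
    for o in 512 131072; do
        echo spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
done
```

Each printed line corresponds to one of the six perf runs whose latency tables follow in this log.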
00:29:23.900 ======================================================== 00:29:23.900 Latency(us) 00:29:23.901 Device Information : IOPS MiB/s Average min max 00:29:23.901 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.10 0.02 20415.87 176.50 48744.68 00:29:23.901 ======================================================== 00:29:23.901 Total : 49.10 0.02 20415.87 176.50 48744.68 00:29:23.901 00:29:23.901 18:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:23.901 18:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.901 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.859 Initializing NVMe Controllers 00:29:33.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:33.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:33.859 Initialization complete. Launching workers. 
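Earlier in this log, `get_lvs_free_mb` turned the `free_clusters` and `cluster_size` fields from `bdev_lvol_get_lvstores` into a megabyte figure (952936 for lvs_0, then 20456 for the nested lvs_n_0). The conversion is plain integer arithmetic; a minimal sketch using the values from this log:

```shell
#!/bin/sh
# free_mb = free_clusters * cluster_size / 1 MiB, as get_lvs_free_mb computes.
lvs_free_mb() {
    fc=$1   # free_clusters from bdev_lvol_get_lvstores
    cs=$2   # cluster_size in bytes
    echo $((fc * cs / 1048576))
}

lvs_free_mb 238234 4194304   # lvs_0   -> 952936
lvs_free_mb 5114   4194304   # lvs_n_0 -> 20456
```

This is why the nested lvol is created at 20456 MB: 5114 free 4 MiB clusters fall just short of the requested 20480 MB cap.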
00:29:33.859 ======================================================== 00:29:33.859 Latency(us) 00:29:33.859 Device Information : IOPS MiB/s Average min max 00:29:33.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.37 10.17 12298.55 4988.40 50862.05 00:29:33.859 ======================================================== 00:29:33.859 Total : 81.37 10.17 12298.55 4988.40 50862.05 00:29:33.859 00:29:33.859 18:17:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:33.859 18:17:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:33.859 18:17:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.859 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.820 Initializing NVMe Controllers 00:29:43.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.820 Initialization complete. Launching workers. 
00:29:43.820 ======================================================== 00:29:43.820 Latency(us) 00:29:43.820 Device Information : IOPS MiB/s Average min max 00:29:43.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7683.86 3.75 4164.13 320.27 11821.88 00:29:43.820 ======================================================== 00:29:43.820 Total : 7683.86 3.75 4164.13 320.27 11821.88 00:29:43.820 00:29:43.820 18:17:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:43.820 18:17:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.820 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.780 Initializing NVMe Controllers 00:29:53.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:53.780 Initialization complete. Launching workers. 
00:29:53.780 ======================================================== 00:29:53.780 Latency(us) 00:29:53.780 Device Information : IOPS MiB/s Average min max 00:29:53.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4016.30 502.04 7972.20 692.34 17065.68 00:29:53.780 ======================================================== 00:29:53.780 Total : 4016.30 502.04 7972.20 692.34 17065.68 00:29:53.780 00:29:53.780 18:18:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:53.780 18:18:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:53.780 18:18:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.780 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.737 Initializing NVMe Controllers 00:30:03.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.737 Controller IO queue size 128, less than required. 00:30:03.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:03.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:03.737 Initialization complete. Launching workers. 
00:30:03.737 ======================================================== 00:30:03.737 Latency(us) 00:30:03.737 Device Information : IOPS MiB/s Average min max 00:30:03.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11781.00 5.75 10869.44 1653.73 25443.94 00:30:03.737 ======================================================== 00:30:03.737 Total : 11781.00 5.75 10869.44 1653.73 25443.94 00:30:03.737 00:30:03.737 18:18:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:03.737 18:18:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:03.737 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.768 Initializing NVMe Controllers 00:30:13.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:13.768 Controller IO queue size 128, less than required. 00:30:13.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:13.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:13.768 Initialization complete. Launching workers. 
00:30:13.768 ======================================================== 00:30:13.768 Latency(us) 00:30:13.768 Device Information : IOPS MiB/s Average min max 00:30:13.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1194.00 149.25 107726.57 16977.48 221718.39 00:30:13.768 ======================================================== 00:30:13.768 Total : 1194.00 149.25 107726.57 16977.48 221718.39 00:30:13.768 00:30:13.768 18:18:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.025 18:18:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f005863b-4421-4463-a748-02ce5ad225e8 00:30:14.589 18:18:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:14.847 18:18:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c23a3c2-2d4a-48e3-8309-4732944418d5 00:30:15.411 18:18:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:15.411 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:15.411 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:15.411 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:15.411 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:15.411 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:15.411 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:15.411 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:30:15.411 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:15.411 rmmod nvme_tcp 00:30:15.411 rmmod nvme_fabrics 00:30:15.411 rmmod nvme_keyring 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2444691 ']' 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2444691 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2444691 ']' 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2444691 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:15.412 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2444691 00:30:15.670 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:15.670 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:15.670 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2444691' 00:30:15.670 killing process with pid 2444691 00:30:15.670 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2444691 00:30:15.670 18:18:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2444691 00:30:17.041 18:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:17.041 18:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:30:17.041 18:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:17.041 18:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:17.041 18:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:17.041 18:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.041 18:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.041 18:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:19.576 00:30:19.576 real 1m30.758s 00:30:19.576 user 5m33.626s 00:30:19.576 sys 0m16.289s 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:19.576 ************************************ 00:30:19.576 END TEST nvmf_perf 00:30:19.576 ************************************ 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.576 ************************************ 00:30:19.576 START TEST nvmf_fio_host 00:30:19.576 ************************************ 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:19.576 * Looking for test storage... 00:30:19.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.576 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.577 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:19.577 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:19.577 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:19.577 18:18:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:21.477 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.478 18:18:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:21.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:21.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:21.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:21.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.478 18:18:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:21.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:21.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:30:21.478 00:30:21.478 --- 10.0.0.2 ping statistics --- 00:30:21.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.478 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:21.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:30:21.478 00:30:21.478 --- 10.0.0.1 ping statistics --- 00:30:21.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.478 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:21.478 
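The lines above build the test topology: one port of the dual-port NIC (cvl_0_0) is moved into a network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and a firewall rule admits NVMe/TCP traffic on port 4420 before both directions are verified with ping. A condensed sketch of that sequence, using the interface names and addresses from this run (DRY_RUN is a convenience added here; the log executes the commands directly, which requires root and the real NICs):

```shell
DRY_RUN=1   # set empty to actually execute (needs root and the cvl_0_* devices)
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                # target-side namespace
run ip link set cvl_0_0 netns "$NS"                   # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator check
```

Running the target inside a namespace is what lets a single host exercise a real TCP path end to end; later RPCs are issued via `ip netns exec cvl_0_0_ns_spdk`.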
18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2457273 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2457273 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2457273 ']' 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:21.478 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.737 [2024-07-23 18:18:29.158832] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:30:21.737 [2024-07-23 18:18:29.158916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.737 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.737 [2024-07-23 18:18:29.223490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:21.737 [2024-07-23 18:18:29.314086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.737 [2024-07-23 18:18:29.314135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.737 [2024-07-23 18:18:29.314163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.737 [2024-07-23 18:18:29.314174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.737 [2024-07-23 18:18:29.314183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:21.737 [2024-07-23 18:18:29.314263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.737 [2024-07-23 18:18:29.314393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.737 [2024-07-23 18:18:29.314420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.737 [2024-07-23 18:18:29.314423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.994 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:21.994 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:21.994 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:22.251 [2024-07-23 18:18:29.704641] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.251 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:22.251 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:22.251 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.251 18:18:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:22.509 Malloc1 00:30:22.509 18:18:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:22.766 18:18:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:23.024 18:18:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.281 [2024-07-23 18:18:30.742643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.281 18:18:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:23.539 18:18:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.797 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:23.797 fio-3.35 
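Before the fio job above starts, the log issues a short RPC sequence to stand up the target: create the TCP transport, back a subsystem with a malloc bdev, and expose it on 10.0.0.2:4420. All of these commands appear verbatim in the log; the sketch below only wraps them in an echo so the sequence can be read (or tested) without a live target. The SPDK path is the one from this workspace:

```shell
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { echo "$SPDK/scripts/rpc.py $*"; }   # replace echo with execution for real use

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# fio then drives I/O through the SPDK external ioengine, preloaded via LD_PRELOAD:
echo "LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
  $SPDK/app/fio/nvme/example_config.fio \
  --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096"
```

Note the fio `--filename` is not a path: the spdk ioengine parses it as NVMe-oF connection parameters (transport, address family, address, service ID, namespace).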
00:30:23.797 Starting 1 thread 00:30:23.797 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.402 00:30:26.402 test: (groupid=0, jobs=1): err= 0: pid=2457641: Tue Jul 23 18:18:33 2024 00:30:26.402 read: IOPS=9207, BW=36.0MiB/s (37.7MB/s)(72.2MiB/2007msec) 00:30:26.402 slat (nsec): min=1963, max=140066, avg=2597.95, stdev=1611.13 00:30:26.402 clat (usec): min=2452, max=12892, avg=7622.51, stdev=617.71 00:30:26.402 lat (usec): min=2479, max=12895, avg=7625.11, stdev=617.63 00:30:26.402 clat percentiles (usec): 00:30:26.402 | 1.00th=[ 6194], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:30:26.402 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:30:26.402 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:30:26.402 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11600], 99.95th=[11994], 00:30:26.402 | 99.99th=[12780] 00:30:26.402 bw ( KiB/s): min=36064, max=37208, per=99.96%, avg=36816.00, stdev=511.71, samples=4 00:30:26.402 iops : min= 9016, max= 9302, avg=9204.00, stdev=127.93, samples=4 00:30:26.402 write: IOPS=9211, BW=36.0MiB/s (37.7MB/s)(72.2MiB/2007msec); 0 zone resets 00:30:26.402 slat (usec): min=2, max=112, avg= 2.76, stdev= 1.29 00:30:26.402 clat (usec): min=1206, max=12456, avg=6233.15, stdev=507.40 00:30:26.402 lat (usec): min=1214, max=12459, avg=6235.91, stdev=507.40 00:30:26.402 clat percentiles (usec): 00:30:26.402 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:30:26.402 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:30:26.402 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6783], 95.00th=[ 6980], 00:30:26.402 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9896], 99.95th=[11469], 00:30:26.402 | 99.99th=[12387] 00:30:26.402 bw ( KiB/s): min=36504, max=37056, per=100.00%, avg=36864.00, stdev=246.40, samples=4 00:30:26.402 iops : min= 9126, max= 9264, avg=9216.00, stdev=61.60, samples=4 00:30:26.402 lat (msec) : 2=0.03%, 4=0.10%, 10=99.73%, 20=0.14% 
00:30:26.402 cpu : usr=63.81%, sys=33.60%, ctx=108, majf=0, minf=6 00:30:26.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:26.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:26.402 issued rwts: total=18479,18488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:26.402 00:30:26.402 Run status group 0 (all jobs): 00:30:26.402 READ: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.2MiB (75.7MB), run=2007-2007msec 00:30:26.403 WRITE: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.2MiB (75.7MB), run=2007-2007msec 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
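fio reports the same bandwidth in both binary (MiB/s) and decimal (MB/s) units, and for a fixed block size it is just IOPS times block size. A quick cross-check against the first run's read figures above (9207 IOPS at bs=4096):

```shell
# 9207 IOPS x 4096 B should reproduce the reported 36.0 MiB/s (37.7 MB/s)
iops=9207
bs=4096
bw_mb=$(awk -v i="$iops" -v b="$bs" 'BEGIN { printf "%.1f", i * b / 1000000 }')
bw_mib=$(awk -v i="$iops" -v b="$bs" 'BEGIN { printf "%.1f", i * b / 1048576 }')
echo "read: ${bw_mib} MiB/s (${bw_mb} MB/s)"
```

The write side matches the same way (9211 IOPS gives the same 36.0 MiB/s figure), which is expected for the 50/50 randrw workload in example_config.fio.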
00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:26.403 18:18:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:30:26.403 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:26.403 fio-3.35 00:30:26.403 Starting 1 thread 00:30:26.403 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.928 00:30:28.928 test: (groupid=0, jobs=1): err= 0: pid=2457974: Tue Jul 23 18:18:36 2024 00:30:28.928 read: IOPS=7986, BW=125MiB/s (131MB/s)(250MiB/2007msec) 00:30:28.928 slat (nsec): min=2941, max=93478, avg=3729.11, stdev=1580.97 00:30:28.928 clat (usec): min=2320, max=20146, avg=9030.11, stdev=1921.11 00:30:28.928 lat (usec): min=2324, max=20150, avg=9033.84, stdev=1921.11 00:30:28.928 clat percentiles (usec): 00:30:28.928 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6652], 20.00th=[ 7373], 00:30:28.928 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9634], 00:30:28.928 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11207], 95.00th=[12256], 00:30:28.928 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15270], 99.95th=[15401], 00:30:28.928 | 99.99th=[17695] 00:30:28.928 bw ( KiB/s): min=50592, max=82624, per=50.88%, avg=65008.00, stdev=13373.31, samples=4 00:30:28.928 iops : min= 3162, max= 5164, avg=4063.00, stdev=835.83, samples=4 00:30:28.928 write: IOPS=4831, BW=75.5MiB/s (79.2MB/s)(134MiB/1771msec); 0 zone resets 00:30:28.928 slat (usec): min=30, max=155, avg=33.62, stdev= 4.58 00:30:28.928 clat (usec): min=6519, max=21833, avg=12474.76, stdev=2584.30 00:30:28.928 lat (usec): min=6554, max=21864, avg=12508.37, stdev=2584.35 00:30:28.928 clat percentiles (usec): 00:30:28.928 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:30:28.928 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12387], 60.00th=[13173], 00:30:28.928 | 70.00th=[14091], 80.00th=[15008], 90.00th=[15795], 95.00th=[16581], 00:30:28.928 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20841], 99.95th=[21365], 00:30:28.928 | 99.99th=[21890] 00:30:28.928 bw ( KiB/s): min=53120, max=84480, per=87.73%, 
avg=67816.00, stdev=12946.66, samples=4 00:30:28.928 iops : min= 3320, max= 5280, avg=4238.50, stdev=809.17, samples=4 00:30:28.928 lat (msec) : 4=0.19%, 10=54.14%, 20=45.57%, 50=0.11% 00:30:28.928 cpu : usr=75.42%, sys=22.93%, ctx=36, majf=0, minf=2 00:30:28.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:28.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:28.928 issued rwts: total=16028,8556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:28.928 00:30:28.928 Run status group 0 (all jobs): 00:30:28.928 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=250MiB (263MB), run=2007-2007msec 00:30:28.928 WRITE: bw=75.5MiB/s (79.2MB/s), 75.5MiB/s-75.5MiB/s (79.2MB/s-79.2MB/s), io=134MiB (140MB), run=1771-1771msec 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:28.928 18:18:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:28.928 18:18:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:32.235 Nvme0n1 00:30:32.235 18:18:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=dcab3b63-440a-4b9a-b690-f9179c795905 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb dcab3b63-440a-4b9a-b690-f9179c795905 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=dcab3b63-440a-4b9a-b690-f9179c795905 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:35.506 { 00:30:35.506 "uuid": "dcab3b63-440a-4b9a-b690-f9179c795905", 00:30:35.506 "name": "lvs_0", 00:30:35.506 "base_bdev": "Nvme0n1", 00:30:35.506 "total_data_clusters": 930, 00:30:35.506 "free_clusters": 930, 00:30:35.506 
"block_size": 512, 00:30:35.506 "cluster_size": 1073741824 00:30:35.506 } 00:30:35.506 ]' 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="dcab3b63-440a-4b9a-b690-f9179c795905") .free_clusters' 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="dcab3b63-440a-4b9a-b690-f9179c795905") .cluster_size' 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:35.506 952320 00:30:35.506 18:18:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:35.506 f79b1dc3-137c-4749-82b7-36f02aa7069e 00:30:35.764 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:35.764 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:36.021 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
--bs=4096 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:36.278 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.535 18:18:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:36.535 18:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:36.535 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:36.535 fio-3.35 00:30:36.535 Starting 1 thread 00:30:36.535 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.059 00:30:39.059 test: (groupid=0, jobs=1): err= 0: pid=2459252: Tue Jul 23 18:18:46 2024 00:30:39.059 read: IOPS=6072, BW=23.7MiB/s (24.9MB/s)(47.6MiB/2008msec) 00:30:39.059 slat (usec): min=2, max=117, avg= 2.62, stdev= 1.68 00:30:39.059 clat (usec): min=1011, max=171102, avg=11505.20, stdev=11590.53 00:30:39.059 lat (usec): min=1014, max=171138, avg=11507.81, stdev=11590.73 00:30:39.059 clat percentiles (msec): 00:30:39.059 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:30:39.059 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:39.059 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:39.059 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 
00:30:39.059 | 99.99th=[ 171] 00:30:39.059 bw ( KiB/s): min=16928, max=26808, per=99.80%, avg=24242.00, stdev=4877.19, samples=4 00:30:39.059 iops : min= 4232, max= 6702, avg=6060.50, stdev=1219.30, samples=4 00:30:39.059 write: IOPS=6053, BW=23.6MiB/s (24.8MB/s)(47.5MiB/2008msec); 0 zone resets 00:30:39.059 slat (nsec): min=2169, max=94055, avg=2758.62, stdev=1426.59 00:30:39.059 clat (usec): min=274, max=169139, avg=9430.38, stdev=10873.38 00:30:39.059 lat (usec): min=277, max=169172, avg=9433.14, stdev=10873.61 00:30:39.059 clat percentiles (msec): 00:30:39.059 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:39.059 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:39.060 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:39.060 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:30:39.060 | 99.99th=[ 169] 00:30:39.060 bw ( KiB/s): min=17960, max=26560, per=99.95%, avg=24202.00, stdev=4168.00, samples=4 00:30:39.060 iops : min= 4490, max= 6640, avg=6050.50, stdev=1042.00, samples=4 00:30:39.060 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:39.060 lat (msec) : 2=0.03%, 4=0.14%, 10=59.76%, 20=39.52%, 250=0.53% 00:30:39.060 cpu : usr=61.48%, sys=36.77%, ctx=47, majf=0, minf=20 00:30:39.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:39.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:39.060 issued rwts: total=12194,12155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:39.060 00:30:39.060 Run status group 0 (all jobs): 00:30:39.060 READ: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=47.6MiB (49.9MB), run=2008-2008msec 00:30:39.060 WRITE: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2008-2008msec 00:30:39.060 18:18:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:39.317 18:18:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:40.249 18:18:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8fd78528-7f46-4dee-9deb-eea08a7ce1ec 00:30:40.249 18:18:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 8fd78528-7f46-4dee-9deb-eea08a7ce1ec 00:30:40.249 18:18:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=8fd78528-7f46-4dee-9deb-eea08a7ce1ec 00:30:40.249 18:18:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:40.249 18:18:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:40.249 18:18:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:40.249 18:18:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:40.506 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:40.506 { 00:30:40.506 "uuid": "dcab3b63-440a-4b9a-b690-f9179c795905", 00:30:40.506 "name": "lvs_0", 00:30:40.506 "base_bdev": "Nvme0n1", 00:30:40.506 "total_data_clusters": 930, 00:30:40.506 "free_clusters": 0, 00:30:40.506 "block_size": 512, 00:30:40.506 "cluster_size": 1073741824 00:30:40.506 }, 00:30:40.506 { 00:30:40.506 "uuid": "8fd78528-7f46-4dee-9deb-eea08a7ce1ec", 00:30:40.506 "name": "lvs_n_0", 00:30:40.506 "base_bdev": "f79b1dc3-137c-4749-82b7-36f02aa7069e", 00:30:40.506 "total_data_clusters": 237847, 00:30:40.506 "free_clusters": 237847, 00:30:40.506 "block_size": 512, 00:30:40.506 
"cluster_size": 4194304 00:30:40.506 } 00:30:40.506 ]' 00:30:40.506 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8fd78528-7f46-4dee-9deb-eea08a7ce1ec") .free_clusters' 00:30:40.506 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:40.506 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8fd78528-7f46-4dee-9deb-eea08a7ce1ec") .cluster_size' 00:30:40.506 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:40.507 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:40.507 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:40.507 951388 00:30:40.507 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:41.439 25725774-8ce8-4acb-829b-5a2cb1cbf456 00:30:41.439 18:18:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:41.697 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:41.697 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:41.955 
18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:41.955 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:42.213 18:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:42.213 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:42.213 fio-3.35 00:30:42.213 Starting 1 thread 00:30:42.213 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.739 00:30:44.739 test: (groupid=0, jobs=1): err= 0: pid=2459989: Tue Jul 23 18:18:52 2024 00:30:44.739 read: IOPS=5900, BW=23.0MiB/s (24.2MB/s)(46.3MiB/2010msec) 00:30:44.739 slat (nsec): min=1949, max=147299, avg=2546.28, stdev=2021.09 00:30:44.739 clat (usec): min=4425, max=19735, avg=11877.59, stdev=1044.69 00:30:44.739 lat (usec): min=4430, max=19738, avg=11880.14, stdev=1044.56 00:30:44.739 clat percentiles (usec): 00:30:44.739 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:30:44.739 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:30:44.739 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13173], 95.00th=[13435], 00:30:44.739 | 99.00th=[14091], 99.50th=[14353], 99.90th=[17695], 99.95th=[17957], 
00:30:44.739 | 99.99th=[19792] 00:30:44.739 bw ( KiB/s): min=22400, max=24056, per=100.00%, avg=23602.00, stdev=804.37, samples=4 00:30:44.739 iops : min= 5600, max= 6014, avg=5900.50, stdev=201.09, samples=4 00:30:44.739 write: IOPS=5898, BW=23.0MiB/s (24.2MB/s)(46.3MiB/2010msec); 0 zone resets 00:30:44.739 slat (nsec): min=2085, max=98170, avg=2697.70, stdev=1514.38 00:30:44.739 clat (usec): min=2119, max=19188, avg=9661.54, stdev=910.24 00:30:44.739 lat (usec): min=2125, max=19191, avg=9664.24, stdev=910.19 00:30:44.739 clat percentiles (usec): 00:30:44.739 | 1.00th=[ 7701], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:30:44.739 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:30:44.739 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:30:44.739 | 99.00th=[11731], 99.50th=[12125], 99.90th=[17957], 99.95th=[18220], 00:30:44.739 | 99.99th=[19006] 00:30:44.739 bw ( KiB/s): min=23320, max=23688, per=99.92%, avg=23576.00, stdev=173.68, samples=4 00:30:44.739 iops : min= 5830, max= 5922, avg=5894.00, stdev=43.42, samples=4 00:30:44.739 lat (msec) : 4=0.05%, 10=35.09%, 20=64.87% 00:30:44.739 cpu : usr=60.43%, sys=37.73%, ctx=89, majf=0, minf=20 00:30:44.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:44.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:44.739 issued rwts: total=11860,11856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:44.740 00:30:44.740 Run status group 0 (all jobs): 00:30:44.740 READ: bw=23.0MiB/s (24.2MB/s), 23.0MiB/s-23.0MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2010-2010msec 00:30:44.740 WRITE: bw=23.0MiB/s (24.2MB/s), 23.0MiB/s-23.0MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2010-2010msec 00:30:44.740 18:18:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:44.997 18:18:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:44.997 18:18:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:49.172 18:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:49.172 18:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:52.447 18:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:52.447 18:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:54.344 
rmmod nvme_tcp 00:30:54.344 rmmod nvme_fabrics 00:30:54.344 rmmod nvme_keyring 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2457273 ']' 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2457273 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2457273 ']' 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2457273 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2457273 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2457273' 00:30:54.344 killing process with pid 2457273 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2457273 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2457273 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:54.344 18:19:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.344 18:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:56.902 00:30:56.902 real 0m37.252s 00:30:56.902 user 2m22.202s 00:30:56.902 sys 0m7.167s 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.902 ************************************ 00:30:56.902 END TEST nvmf_fio_host 00:30:56.902 ************************************ 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.902 ************************************ 00:30:56.902 START TEST nvmf_failover 00:30:56.902 ************************************ 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:56.902 * Looking for test storage... 00:30:56.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.902 18:19:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.902 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:56.903 18:19:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.801 
18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:58.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:58.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:58.801 18:19:06 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:58.801 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:58.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:58.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
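The trace above shows `gather_supported_nvmf_pci_devs` turning sysfs glob results (`/sys/bus/pci/devices/$pci/net/*`) into bare interface names such as `cvl_0_0` with the expansion at `nvmf/common.sh@399`. A minimal standalone sketch of that expansion, using hypothetical path strings in place of real sysfs entries:

```shell
# Hypothetical paths standing in for the /sys/bus/pci/devices/<bdf>/net/* glob
# results; in the real harness these come from sysfs for each detected NIC.
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:0a:00.1/net/cvl_0_1")

# Same expansion as common.sh@399: ##*/ strips the longest prefix ending in '/'
# from every array element, leaving only the interface names.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "${pci_net_devs[@]}"   # cvl_0_0 cvl_0_1
```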
00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:58.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:58.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:30:58.802 00:30:58.802 --- 10.0.0.2 ping statistics --- 00:30:58.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.802 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:58.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:30:58.802 00:30:58.802 --- 10.0.0.1 ping statistics --- 00:30:58.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.802 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 
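The `nvmf_tcp_init` commands traced above (common.sh@229-268) build a two-sided topology on one host: the target NIC `cvl_0_0` is moved into the namespace `cvl_0_0_ns_spdk` as 10.0.0.2, the initiator NIC `cvl_0_1` stays in the root namespace as 10.0.0.1, and the cross-namespace pings verify the path. A dry-run sketch of that sequence follows; the `run` helper is my addition and only echoes each command (drop it to execute for real, which requires root and the actual NICs):

```shell
# Dry-run sketch of the nvmf_tcp_init topology; run() only echoes here.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target NIC into namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # verify root ns -> target ns
```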
00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2463230 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2463230 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2463230 ']' 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:58.802 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:58.802 [2024-07-23 18:19:06.347879] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:30:58.802 [2024-07-23 18:19:06.347966] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.802 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.802 [2024-07-23 18:19:06.413395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:59.060 [2024-07-23 18:19:06.501113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.060 [2024-07-23 18:19:06.501161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.060 [2024-07-23 18:19:06.501188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.060 [2024-07-23 18:19:06.501199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.060 [2024-07-23 18:19:06.501208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:59.060 [2024-07-23 18:19:06.501302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.060 [2024-07-23 18:19:06.501436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:59.060 [2024-07-23 18:19:06.501440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.060 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.060 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:59.060 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.060 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:59.060 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:59.060 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.060 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:59.317 [2024-07-23 18:19:06.899922] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.317 18:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:59.574 Malloc0 00:30:59.574 18:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.831 18:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:00.394 18:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.394 [2024-07-23 18:19:08.047477] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.650 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:00.650 [2024-07-23 18:19:08.300237] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:00.907 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:01.165 [2024-07-23 18:19:08.601175] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2463523 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2463523 /var/tmp/bdevperf.sock 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2463523 ']' 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:01.165 18:19:08 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:01.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.165 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:01.423 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.423 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:01.423 18:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:01.680 NVMe0n1 00:31:01.680 18:19:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.242 00:31:02.242 18:19:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2463672 00:31:02.242 18:19:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:02.242 18:19:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:03.173 18:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420 00:31:03.430 18:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:06.706 18:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:06.963 00:31:06.963 18:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:07.220 [2024-07-23 18:19:14.640081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640234] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 [2024-07-23 18:19:14.640311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b5c0 is same with the state(5) to be set 00:31:07.220 18:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:10.498 18:19:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.498 [2024-07-23 18:19:17.913540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.498 18:19:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:11.431 18:19:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:11.689 18:19:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2463672 00:31:18.252 0 
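The failover exercise traced above boils down to a short RPC sequence: export a malloc bdev on three TCP listeners (4420/4421/4422), attach two bdevperf paths, then remove the active listener to force I/O onto the surviving path. A condensed dry-run sketch, where `rpc.py` abbreviates the full `scripts/rpc.py` path and the echo-only `run` helper is my addition:

```shell
# Condensed dry-run of the host/failover.sh flow; run() only echoes commands.
run() { echo "+ $*"; }
RPC="rpc.py"; BPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: malloc bdev exported over three TCP listeners.
run $RPC nvmf_create_transport -t tcp -o -u 8192
run $RPC bdev_malloc_create 64 512 -b Malloc0
run $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
run $RPC nvmf_subsystem_add_ns $NQN Malloc0
for port in 4420 4421 4422; do
    run $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
done

# Initiator side: bdevperf attaches two paths, then the test removes the
# active listener (4420) to force a failover onto 4421.
run $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
run $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
run $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
```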
00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2463523 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2463523 ']' 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2463523 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2463523 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2463523' 00:31:18.252 killing process with pid 2463523 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2463523 00:31:18.252 18:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2463523 00:31:18.252 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:18.252 [2024-07-23 18:19:08.662141] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:31:18.252 [2024-07-23 18:19:08.662241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463523 ] 00:31:18.252 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.252 [2024-07-23 18:19:08.722976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.252 [2024-07-23 18:19:08.808957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.252 Running I/O for 15 seconds... 00:31:18.252 [2024-07-23 18:19:11.051721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.252 [2024-07-23 18:19:11.051787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.051814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.252 [2024-07-23 18:19:11.051830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.051847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.051861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.051877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.051891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.051906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.051936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.051951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.051965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.051980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.051993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.052008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.052021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.052036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.052049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.052064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.052077] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.052092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.052105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.052132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.052147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.052162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.052175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.052189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.052203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.252 [2024-07-23 18:19:11.052218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.252 [2024-07-23 18:19:11.052231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 
nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 
[2024-07-23 18:19:11.052444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.052577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 
[2024-07-23 18:19:11.052950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.052977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.052990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.053005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.053018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.053033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.253 [2024-07-23 18:19:11.053046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.053061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.053074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.053089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.053102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.053116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.053129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.053159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.053173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.053188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.253 [2024-07-23 18:19:11.053201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.253 [2024-07-23 18:19:11.053216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 
18:19:11.053460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.254 [2024-07-23 18:19:11.053735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.053982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.053995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.054029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.054058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.054087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.054116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.054145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.054174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.054202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.254 [2024-07-23 18:19:11.054231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.254 [2024-07-23 18:19:11.054247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 
[2024-07-23 18:19:11.054289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 
[2024-07-23 18:19:11.054799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.054973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.054988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.255 [2024-07-23 18:19:11.055256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.255 [2024-07-23 18:19:11.055269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 
[2024-07-23 18:19:11.055298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.256 [2024-07-23 18:19:11.055574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.256 [2024-07-23 18:19:11.055620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.256 [2024-07-23 18:19:11.055632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:31:18.256 [2024-07-23 18:19:11.055645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055712] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x128c150 was disconnected and freed. reset controller. 00:31:18.256 [2024-07-23 18:19:11.055729] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:18.256 [2024-07-23 18:19:11.055772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:18.256 [2024-07-23 18:19:11.055790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:18.256 [2024-07-23 18:19:11.055819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:18.256 [2024-07-23 18:19:11.055845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:18.256 [2024-07-23 18:19:11.055871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:11.055884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:18.256 [2024-07-23 18:19:11.059943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.256 [2024-07-23 18:19:11.059984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1298bd0 (9): Bad file descriptor 00:31:18.256 [2024-07-23 18:19:11.169821] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:18.256 [2024-07-23 18:19:14.642470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95600 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.256 [2024-07-23 18:19:14.642952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.256 [2024-07-23 18:19:14.642967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.642981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.642995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 
[2024-07-23 18:19:14.643146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643331] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.257 [2024-07-23 18:19:14.643666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.257 [2024-07-23 18:19:14.643681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.643973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.643987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 
18:19:14.644002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644164] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.258 [2024-07-23 18:19:14.644326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.258 [2024-07-23 18:19:14.644343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:31:18.258 [2024-07-23 18:19:14.644357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.258 [2024-07-23 18:19:14.644387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.258 [2024-07-23 18:19:14.644416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.258 [2024-07-23 18:19:14.644445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:18.258 [2024-07-23 18:19:14.644495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0
00:31:18.258 [2024-07-23 18:19:14.644509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:18.258 [2024-07-23 18:19:14.644590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:18.258 [2024-07-23 18:19:14.644619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:18.258 [2024-07-23 18:19:14.644646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:18.258 [2024-07-23 18:19:14.644680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.258 [2024-07-23 18:19:14.644693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1298bd0 is same with the state(5) to be set
00:31:18.258 [2024-07-23 18:19:14.644851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:18.258 [2024-07-23 18:19:14.644870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:18.258 [2024-07-23 18:19:14.644883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0
00:31:18.258 [2024-07-23 18:19:14.644896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same four-line abort cycle ("aborting queued i/o" / "Command completed manually:" / command print / "ABORTED - SQ DELETION (00/08) qid:1 cid:0") repeats between 18:19:14.644912 and 18:19:14.648070 for each remaining queued request, in steps of 8 LBAs: WRITE lba:96096 through lba:96392, READ lba:95440 through lba:95552, WRITE lba:96400 through lba:96448, READ lba:95560, and WRITE lba:95568 through lba:95592 ...]
00:31:18.262 [2024-07-23 18:19:14.648070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs:
*ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95600 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95608 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95616 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95624 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95632 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95640 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95648 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95656 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95664 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95672 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95680 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95704 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95712 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95728 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 
[2024-07-23 18:19:14.648918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95736 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.262 [2024-07-23 18:19:14.648954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.262 [2024-07-23 18:19:14.648964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.262 [2024-07-23 18:19:14.648975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95744 len:8 PRP1 0x0 PRP2 0x0 00:31:18.262 [2024-07-23 18:19:14.648987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.649022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95752 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.649068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:95760 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.649115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95768 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.649162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95776 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.649214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649245] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.649267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95792 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.649314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95800 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.649369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.649405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 
18:19:14.649416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.649428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.649441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95832 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95840 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95848 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655567] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95872 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95880 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95888 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95896 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 
[2024-07-23 18:19:14.655728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95904 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.263 [2024-07-23 18:19:14.655803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.263 [2024-07-23 18:19:14.655813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:31:18.263 [2024-07-23 18:19:14.655826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.263 [2024-07-23 18:19:14.655839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.264 [2024-07-23 18:19:14.655850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.264 [2024-07-23 18:19:14.655860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0 00:31:18.264 [2024-07-23 18:19:14.655872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.264 [2024-07-23 18:19:14.655885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:18.264 [2024-07-23 18:19:14.655896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.264 [2024-07-23 18:19:14.655906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95928 len:8 PRP1 0x0 PRP2 0x0 00:31:18.264 [2024-07-23 18:19:14.655918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.264 [2024-07-23 18:19:14.655931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.264 [2024-07-23 18:19:14.655941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.264 [2024-07-23 18:19:14.655952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95936 len:8 PRP1 0x0 PRP2 0x0 00:31:18.264 [2024-07-23 18:19:14.655964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.264 [2024-07-23 18:19:14.655977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.264 [2024-07-23 18:19:14.655987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.264 [2024-07-23 18:19:14.655998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95944 len:8 PRP1 0x0 PRP2 0x0 00:31:18.264 [2024-07-23 18:19:14.656010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.264 [2024-07-23 18:19:14.656022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.264 [2024-07-23 18:19:14.656033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.264 [2024-07-23 18:19:14.656044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95952 len:8 PRP1 0x0 PRP2 0x0
00:31:18.264 [2024-07-23 18:19:14.656068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.264 [2024-07-23 18:19:14.656068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:18.264 [2024-07-23 18:19:14.656079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the same abort/manual-complete sequence repeats for WRITE lba:95960 through lba:96072 (step 8) and READ lba:95432, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:31:18.264 [2024-07-23 18:19:14.656842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0
00:31:18.264 [2024-07-23 18:19:14.656858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.264 [2024-07-23 18:19:14.656923] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12bc6b0 was disconnected and freed. reset controller.
00:31:18.264 [2024-07-23 18:19:14.656941] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:18.264 [2024-07-23 18:19:14.656956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.264 [2024-07-23 18:19:14.657011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1298bd0 (9): Bad file descriptor
00:31:18.264 [2024-07-23 18:19:14.660917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.264 [2024-07-23 18:19:14.824530] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:18.265 [2024-07-23 18:19:19.210078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.265 [2024-07-23 18:19:19.210141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0, for WRITE lba:56688 through lba:56856 (step 8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, varying cid) and READ lba:55856 through lba:56336 (step 8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, varying cid) ...]
00:31:18.267 [2024-07-23 18:19:19.212691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.267 [2024-07-23 18:19:19.212705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-23 18:19:19.212720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.212984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.212998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 
[2024-07-23 18:19:19.213222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.267 [2024-07-23 18:19:19.213482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.267 [2024-07-23 18:19:19.213511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.267 [2024-07-23 18:19:19.213540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.267 [2024-07-23 18:19:19.213556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 
[2024-07-23 18:19:19.213730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.268 [2024-07-23 18:19:19.213952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.213967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcca0 is same with the state(5) to be set 00:31:18.268 [2024-07-23 18:19:19.213985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:18.268 [2024-07-23 18:19:19.213997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.268 [2024-07-23 18:19:19.214009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56672 len:8 PRP1 0x0 PRP2 0x0 00:31:18.268 [2024-07-23 18:19:19.214022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.268 [2024-07-23 18:19:19.214090] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12bcca0 was disconnected and freed. reset controller. 
00:31:18.268 [2024-07-23 18:19:19.214108] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:31:18.268 [2024-07-23 18:19:19.214144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:18.268 [2024-07-23 18:19:19.214161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.268 [2024-07-23 18:19:19.214176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:18.268 [2024-07-23 18:19:19.214189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.268 [2024-07-23 18:19:19.214203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:18.268 [2024-07-23 18:19:19.214215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.268 [2024-07-23 18:19:19.214228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:18.268 [2024-07-23 18:19:19.214241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.268 [2024-07-23 18:19:19.214254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.268 [2024-07-23 18:19:19.218363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.268 [2024-07-23 18:19:19.218403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1298bd0 (9): Bad file descriptor
00:31:18.268 [2024-07-23 18:19:19.291885] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:18.268
00:31:18.268 Latency(us)
00:31:18.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:18.268 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:18.268 Verification LBA range: start 0x0 length 0x4000
00:31:18.268 NVMe0n1 : 15.01 8526.86 33.31 898.43 0.00 13553.11 801.00 21068.61
00:31:18.268 ===================================================================================================================
00:31:18.268 Total : 8526.86 33.31 898.43 0.00 13553.11 801.00 21068.61
00:31:18.268 Received shutdown signal, test time was about 15.000000 seconds
00:31:18.268
00:31:18.268 Latency(us)
00:31:18.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:18.268 ===================================================================================================================
00:31:18.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2465490
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w
verify -t 1 -f
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2465490 /var/tmp/bdevperf.sock
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2465490 ']'
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:18.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:18.268 [2024-07-23 18:19:25.735850] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:18.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:31:18.533 [2024-07-23 18:19:25.984552] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:31:18.533 18:19:26 nvmf_tcp.nvmf_host.nvmf_failover
-- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:19.097 NVMe0n1
00:31:19.097 18:19:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:19.354
00:31:19.354 18:19:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:19.610
00:31:19.610 18:19:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:19.610 18:19:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:31:19.867 18:19:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:20.124 18:19:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:31:23.399 18:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:23.399 18:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:31:23.399 18:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2466166
00:31:23.399 18:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:23.399 18:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2466166
00:31:24.769 0
00:31:24.769 18:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:24.769 [2024-07-23 18:19:25.247830] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:31:24.769 [2024-07-23 18:19:25.247926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465490 ]
00:31:24.769 EAL: No free 2048 kB hugepages reported on node 1
00:31:24.769 [2024-07-23 18:19:25.308900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:24.769 [2024-07-23 18:19:25.391731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:24.769 [2024-07-23 18:19:27.698935] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:24.769 [2024-07-23 18:19:27.699020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:24.769 [2024-07-23 18:19:27.699043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:24.769 [2024-07-23 18:19:27.699074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:24.769 [2024-07-23 18:19:27.699088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:24.769 [2024-07-23 18:19:27.699102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:24.769 [2024-07-23 18:19:27.699116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:24.769 [2024-07-23 18:19:27.699130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:24.769 [2024-07-23 18:19:27.699142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:24.769 [2024-07-23 18:19:27.699155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:24.769 [2024-07-23 18:19:27.699199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:24.769 [2024-07-23 18:19:27.699229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c6bd0 (9): Bad file descriptor
00:31:24.769 [2024-07-23 18:19:27.791505] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:24.769 Running I/O for 1 seconds...
00:31:24.769
00:31:24.769 Latency(us)
00:31:24.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:24.769 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:24.769 Verification LBA range: start 0x0 length 0x4000
00:31:24.769 NVMe0n1 : 1.04 8368.61 32.69 0.00 0.00 14638.54 3373.89 45826.65
00:31:24.769 ===================================================================================================================
00:31:24.769 Total : 8368.61 32.69 0.00 0.00 14638.54 3373.89 45826.65
00:31:24.769 18:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:24.769 18:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:25.026 18:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:25.026 18:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:25.026 18:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:25.283 18:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:25.540 18:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:28.816
18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2465490 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2465490 ']' 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2465490 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2465490 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2465490' 00:31:28.816 killing process with pid 2465490 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2465490 00:31:28.816 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2465490 00:31:29.074 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:29.074 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:29.331 rmmod nvme_tcp 00:31:29.331 rmmod nvme_fabrics 00:31:29.331 rmmod nvme_keyring 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2463230 ']' 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2463230 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2463230 ']' 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2463230 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:29.331 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:29.332 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2463230 00:31:29.332 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:29.332 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:31:29.332 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2463230' 00:31:29.332 killing process with pid 2463230 00:31:29.332 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2463230 00:31:29.332 18:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2463230 00:31:29.591 18:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:29.591 18:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:29.591 18:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:29.591 18:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:29.591 18:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:29.591 18:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.591 18:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.591 18:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:32.129 00:31:32.129 real 0m35.151s 00:31:32.129 user 2m3.766s 00:31:32.129 sys 0m5.987s 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:32.129 ************************************ 00:31:32.129 END TEST nvmf_failover 00:31:32.129 ************************************ 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test 
nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.129 ************************************ 00:31:32.129 START TEST nvmf_host_discovery 00:31:32.129 ************************************ 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:32.129 * Looking for test storage... 00:31:32.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:32.129 18:19:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.030 18:19:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:34.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.030 18:19:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:34.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:34.030 18:19:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:34.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.030 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:34.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.031 18:19:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:34.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:31:34.031 00:31:34.031 --- 10.0.0.2 ping statistics --- 00:31:34.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.031 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:31:34.031 00:31:34.031 --- 10.0.0.1 ping statistics --- 00:31:34.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.031 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2468757 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2468757 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2468757 ']' 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:34.031 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.031 [2024-07-23 18:19:41.577281] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:31:34.031 [2024-07-23 18:19:41.577372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.031 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.031 [2024-07-23 18:19:41.643450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.289 [2024-07-23 18:19:41.727910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.289 [2024-07-23 18:19:41.727996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:34.289 [2024-07-23 18:19:41.728022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.289 [2024-07-23 18:19:41.728033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.289 [2024-07-23 18:19:41.728043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.289 [2024-07-23 18:19:41.728067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.289 [2024-07-23 18:19:41.866456] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.289 [2024-07-23 18:19:41.874715] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.289 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.289 null0 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.290 null1 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2468893 00:31:34.290 
18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2468893 /tmp/host.sock 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2468893 ']' 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:34.290 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:34.290 18:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.290 [2024-07-23 18:19:41.947243] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:31:34.290 [2024-07-23 18:19:41.947346] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468893 ] 00:31:34.547 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.547 [2024-07-23 18:19:42.008109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.547 [2024-07-23 18:19:42.095374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.547 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.805 18:19:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:34.805 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.806 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.806 [2024-07-23 18:19:42.464172] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.063 
18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:35.063 18:19:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.063 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 
00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:35.064 18:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:35.629 [2024-07-23 18:19:43.263943] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:35.629 [2024-07-23 18:19:43.263967] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:35.629 [2024-07-23 18:19:43.263993] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:35.886 [2024-07-23 18:19:43.350289] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:36.144 [2024-07-23 18:19:43.579029] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:36.144 [2024-07-23 18:19:43.579050] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 
00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.144 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:36.145 18:19:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.145 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:36.403 18:19:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.403 18:19:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.403 [2024-07-23 18:19:43.912359] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:36.403 [2024-07-23 18:19:43.912546] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:36.403 [2024-07-23 18:19:43.912576] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:36.403 18:19:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:36.403 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:36.404 18:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:36.404 18:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.404 18:19:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:36.404 18:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:36.404 [2024-07-23 18:19:44.040404] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:36.968 [2024-07-23 18:19:44.340700] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:36.968 [2024-07-23 18:19:44.340736] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:36.968 [2024-07-23 18:19:44.340744] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 
-- # xargs 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.540 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.540 [2024-07-23 18:19:45.120132] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:37.540 [2024-07-23 18:19:45.120163] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:37.540 [2024-07-23 18:19:45.120497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.541 [2024-07-23 18:19:45.120526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.541 [2024-07-23 18:19:45.120543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:37.541 [2024-07-23 18:19:45.120557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.541 [2024-07-23 18:19:45.120571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.541 [2024-07-23 18:19:45.120585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.541 [2024-07-23 18:19:45.120600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.541 [2024-07-23 18:19:45.120623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.541 [2024-07-23 18:19:45.120643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7660 is same with the state(5) to be set 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:37.541 18:19:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:37.541 [2024-07-23 18:19:45.130498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7660 (9): Bad file descriptor 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.541 [2024-07-23 18:19:45.140539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.541 [2024-07-23 18:19:45.140748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.541 [2024-07-23 18:19:45.140776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7660 with addr=10.0.0.2, port=4420 00:31:37.541 [2024-07-23 18:19:45.140792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7660 is same with the state(5) to be set 00:31:37.541 [2024-07-23 18:19:45.140814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7660 (9): Bad file descriptor 00:31:37.541 [2024-07-23 18:19:45.140836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.541 [2024-07-23 18:19:45.140850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.541 
[2024-07-23 18:19:45.140864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.541 [2024-07-23 18:19:45.140884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.541 [2024-07-23 18:19:45.150638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.541 [2024-07-23 18:19:45.150852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.541 [2024-07-23 18:19:45.150880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7660 with addr=10.0.0.2, port=4420 00:31:37.541 [2024-07-23 18:19:45.150896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7660 is same with the state(5) to be set 00:31:37.541 [2024-07-23 18:19:45.150918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7660 (9): Bad file descriptor 00:31:37.541 [2024-07-23 18:19:45.150938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.541 [2024-07-23 18:19:45.150951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.541 [2024-07-23 18:19:45.150973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.541 [2024-07-23 18:19:45.150993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.541 [2024-07-23 18:19:45.160723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.541 [2024-07-23 18:19:45.160986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.541 [2024-07-23 18:19:45.161014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7660 with addr=10.0.0.2, port=4420 00:31:37.541 [2024-07-23 18:19:45.161029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7660 is same with the state(5) to be set 00:31:37.541 [2024-07-23 18:19:45.161051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7660 (9): Bad file descriptor 00:31:37.541 [2024-07-23 18:19:45.161071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.541 [2024-07-23 18:19:45.161084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.541 [2024-07-23 18:19:45.161097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.541 [2024-07-23 18:19:45.161115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.541 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:37.541 [2024-07-23 18:19:45.170810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.541 [2024-07-23 18:19:45.171074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.541 [2024-07-23 18:19:45.171102] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7660 with addr=10.0.0.2, port=4420 00:31:37.541 [2024-07-23 18:19:45.171119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7660 is same with the state(5) to be set 00:31:37.541 [2024-07-23 18:19:45.171142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7660 (9): Bad file descriptor 00:31:37.541 [2024-07-23 18:19:45.171162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.541 [2024-07-23 18:19:45.171176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.541 [2024-07-23 18:19:45.171189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.541 [2024-07-23 18:19:45.171214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.541 [2024-07-23 18:19:45.180897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.541 [2024-07-23 18:19:45.181051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.541 [2024-07-23 18:19:45.181080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7660 with addr=10.0.0.2, port=4420 00:31:37.541 [2024-07-23 18:19:45.181096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7660 is same with the state(5) to be set 00:31:37.541 [2024-07-23 18:19:45.181118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7660 (9): Bad file descriptor 00:31:37.541 [2024-07-23 18:19:45.181138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.541 [2024-07-23 18:19:45.181151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.541 [2024-07-23 18:19:45.181164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.541 [2024-07-23 18:19:45.181182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.541 [2024-07-23 18:19:45.190985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.541 [2024-07-23 18:19:45.191157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.541 [2024-07-23 18:19:45.191186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7660 with addr=10.0.0.2, port=4420 00:31:37.542 [2024-07-23 18:19:45.191202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7660 is same with the state(5) to be set 00:31:37.542 [2024-07-23 18:19:45.191224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7660 (9): Bad file descriptor 00:31:37.542 [2024-07-23 18:19:45.191243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.542 [2024-07-23 18:19:45.191257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.542 [2024-07-23 18:19:45.191273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.542 [2024-07-23 18:19:45.191301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.839 [2024-07-23 18:19:45.201072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.839 [2024-07-23 18:19:45.201226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.839 [2024-07-23 18:19:45.201255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7660 with addr=10.0.0.2, port=4420 00:31:37.839 [2024-07-23 18:19:45.201272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7660 is same with the state(5) to be set 00:31:37.839 [2024-07-23 18:19:45.201307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7660 (9): Bad file descriptor 00:31:37.839 [2024-07-23 18:19:45.201340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.839 [2024-07-23 18:19:45.201355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.839 [2024-07-23 18:19:45.201367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.839 [2024-07-23 18:19:45.201386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.839 [2024-07-23 18:19:45.207897] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:37.839 [2024-07-23 18:19:45.207927] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.839 18:19:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.839 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:37.840 18:19:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.840 
18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:37.840 18:19:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.840 18:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.214 [2024-07-23 18:19:46.492440] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:39.214 [2024-07-23 18:19:46.492477] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:39.214 [2024-07-23 18:19:46.492501] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:39.214 [2024-07-23 18:19:46.578778] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:39.473 [2024-07-23 18:19:46.890464] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:39.473 [2024-07-23 18:19:46.890511] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.473 request: 00:31:39.473 { 00:31:39.473 "name": "nvme", 00:31:39.473 "trtype": "tcp", 
00:31:39.473 "traddr": "10.0.0.2", 00:31:39.473 "adrfam": "ipv4", 00:31:39.473 "trsvcid": "8009", 00:31:39.473 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:39.473 "wait_for_attach": true, 00:31:39.473 "method": "bdev_nvme_start_discovery", 00:31:39.473 "req_id": 1 00:31:39.473 } 00:31:39.473 Got JSON-RPC error response 00:31:39.473 response: 00:31:39.473 { 00:31:39.473 "code": -17, 00:31:39.473 "message": "File exists" 00:31:39.473 } 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 
00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:39.473 18:19:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.473 request: 00:31:39.473 { 00:31:39.473 "name": "nvme_second", 00:31:39.473 "trtype": "tcp", 00:31:39.473 "traddr": "10.0.0.2", 00:31:39.473 "adrfam": "ipv4", 00:31:39.473 "trsvcid": "8009", 00:31:39.473 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:39.473 "wait_for_attach": true, 00:31:39.473 "method": "bdev_nvme_start_discovery", 00:31:39.473 "req_id": 1 00:31:39.473 } 00:31:39.473 Got JSON-RPC error response 00:31:39.473 response: 00:31:39.473 { 00:31:39.473 "code": -17, 00:31:39.473 "message": "File exists" 00:31:39.473 } 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:39.473 18:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:39.473 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.473 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@648 -- # local es=0 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.474 18:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.843 [2024-07-23 18:19:48.081942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.843 [2024-07-23 18:19:48.081991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f21f0 with addr=10.0.0.2, port=8010 00:31:40.843 [2024-07-23 18:19:48.082016] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:40.843 [2024-07-23 18:19:48.082030] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:40.843 [2024-07-23 18:19:48.082043] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:41.775 [2024-07-23 18:19:49.084214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:31:41.775 [2024-07-23 18:19:49.084263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e63d0 with addr=10.0.0.2, port=8010 00:31:41.775 [2024-07-23 18:19:49.084285] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:41.775 [2024-07-23 18:19:49.084298] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:41.775 [2024-07-23 18:19:49.084309] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:42.706 [2024-07-23 18:19:50.086502] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:42.706 request: 00:31:42.706 { 00:31:42.706 "name": "nvme_second", 00:31:42.706 "trtype": "tcp", 00:31:42.706 "traddr": "10.0.0.2", 00:31:42.706 "adrfam": "ipv4", 00:31:42.706 "trsvcid": "8010", 00:31:42.706 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:42.706 "wait_for_attach": false, 00:31:42.706 "attach_timeout_ms": 3000, 00:31:42.706 "method": "bdev_nvme_start_discovery", 00:31:42.706 "req_id": 1 00:31:42.706 } 00:31:42.706 Got JSON-RPC error response 00:31:42.706 response: 00:31:42.706 { 00:31:42.706 "code": -110, 00:31:42.706 "message": "Connection timed out" 00:31:42.706 } 00:31:42.706 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:42.706 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:42.706 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:42.706 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:42.706 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:42.707 18:19:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2468893 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:42.707 rmmod nvme_tcp 00:31:42.707 rmmod nvme_fabrics 00:31:42.707 rmmod nvme_keyring 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2468757 ']' 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2468757 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2468757 ']' 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2468757 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2468757 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2468757' 00:31:42.707 killing process with pid 2468757 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2468757 00:31:42.707 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2468757 00:31:42.967 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:42.967 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:42.967 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:42.967 
18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:42.967 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:42.967 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.967 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.967 18:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:44.869 00:31:44.869 real 0m13.177s 00:31:44.869 user 0m19.014s 00:31:44.869 sys 0m2.775s 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.869 ************************************ 00:31:44.869 END TEST nvmf_host_discovery 00:31:44.869 ************************************ 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.869 ************************************ 00:31:44.869 START TEST nvmf_host_multipath_status 00:31:44.869 ************************************ 00:31:44.869 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:45.127 * Looking for test storage... 00:31:45.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.127 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:45.128 18:19:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:45.128 18:19:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.028 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:47.029 
18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:47.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:47.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:47.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:47.029 18:19:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:47.029 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:47.029 18:19:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.029 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.287 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.287 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.287 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:47.287 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.287 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.287 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.287 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:47.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:47.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:31:47.287 00:31:47.287 --- 10.0.0.2 ping statistics --- 00:31:47.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.288 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:47.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:47.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:31:47.288 00:31:47.288 --- 10.0.0.1 ping statistics --- 00:31:47.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.288 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:47.288 18:19:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2471927 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2471927 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2471927 ']' 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:47.288 18:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:47.288 [2024-07-23 18:19:54.864161] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:31:47.288 [2024-07-23 18:19:54.864249] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.288 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.288 [2024-07-23 18:19:54.929054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:47.546 [2024-07-23 18:19:55.017329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.546 [2024-07-23 18:19:55.017383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:47.546 [2024-07-23 18:19:55.017412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.546 [2024-07-23 18:19:55.017424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.546 [2024-07-23 18:19:55.017434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:47.546 [2024-07-23 18:19:55.017498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.546 [2024-07-23 18:19:55.017504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.546 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:47.546 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:47.546 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:47.546 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:47.546 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:47.546 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.546 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2471927 00:31:47.546 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:47.803 [2024-07-23 18:19:55.429140] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.803 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:48.369 Malloc0 00:31:48.369 18:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:48.626 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:48.884 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.884 [2024-07-23 18:19:56.525768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:49.142 [2024-07-23 18:19:56.774365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2472211 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2472211 /var/tmp/bdevperf.sock 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2472211 ']' 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:49.142 18:19:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:49.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:49.142 18:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:49.708 18:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:49.708 18:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:49.708 18:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:49.708 18:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:50.273 Nvme0n1 00:31:50.273 18:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:50.838 Nvme0n1 00:31:50.838 18:19:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:50.838 18:19:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
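(Annotation, not part of the trace: each check_status round below calls bdev_nvme_get_io_paths, extracts one field with jq, and compares it in bash. The odd-looking `[[ true == \t\r\u\e ]]` lines are just xtrace output: bash treats the right-hand side of `==` as a glob pattern and prints it character-escaped. A minimal stand-in for that comparison, not the real port_status helper:)

```shell
# Stand-in for the port_status check traced below: compare a jq-extracted
# field against an expected literal. Quoting the RHS of == makes it a
# literal match; unquoted, xtrace renders it escaped, e.g. \t\r\u\e.
port_status() {
    local got=$1 want=$2
    [[ $got == "$want" ]]    # returns 0 on match, 1 otherwise
}
port_status true true  && echo "path check: ok"
port_status false true || echo "path check: mismatch"
```

In the real script the first argument comes from `jq -r '... | select(.transport.trsvcid=="4420").current'` against the RPC output, so a healthy optimized path yields the `true == true` comparisons seen in the trace.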
00:31:52.738 18:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:52.738 18:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:52.998 18:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:53.256 18:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:54.629 18:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:54.629 18:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:54.629 18:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.629 18:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.629 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.629 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:54.629 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.629 18:20:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:54.887 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.887 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:54.887 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.887 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:55.145 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.145 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:55.145 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.145 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.403 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.403 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:55.403 18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.403 
18:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.661 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.661 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:55.661 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.661 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:55.920 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.920 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:55.920 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:56.182 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:56.483 18:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:57.413 18:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:57.413 18:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:57.413 18:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.413 18:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:57.670 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.670 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:57.670 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.670 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:57.927 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.927 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:57.927 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.927 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:58.185 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.185 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:58.185 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.185 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:58.443 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.443 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:58.443 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.443 18:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:58.701 18:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.701 18:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:58.701 18:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.701 18:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:58.959 18:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.959 18:20:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:58.959 18:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:59.216 18:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:59.475 18:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:00.408 18:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:00.408 18:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:00.408 18:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.408 18:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:00.666 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.666 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:00.666 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.666 18:20:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:00.924 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.924 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:00.924 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.924 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:01.181 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.181 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:01.181 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.181 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:01.439 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.439 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:01.439 18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.439 
18:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:01.697 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.697 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:01.697 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.697 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:01.955 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.955 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:01.955 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:02.213 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:02.471 18:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:03.404 18:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:03.404 18:20:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:03.404 18:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.404 18:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:03.661 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.661 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:03.661 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.661 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:03.919 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.919 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:03.919 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.919 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:04.176 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.176 18:20:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:04.176 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.176 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:04.434 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.434 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:04.434 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.434 18:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:04.691 18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.691 18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:04.691 18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.691 18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:04.949 18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.949 
18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:04.949 18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:05.206 18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:05.464 18:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:06.394 18:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:06.394 18:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:06.394 18:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.394 18:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:06.650 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:06.650 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:06.650 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.650 18:20:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:06.906 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:06.906 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:06.907 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.907 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:07.163 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.163 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:07.163 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.163 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:07.420 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.420 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:07.420 18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.420 
18:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:07.677 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:07.677 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:07.677 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.677 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:07.934 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:07.934 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:07.934 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:08.191 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:08.448 18:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:09.379 18:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:09.379 18:20:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:09.379 18:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.379 18:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:09.637 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:09.637 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:09.637 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.637 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:09.895 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.895 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:09.895 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.895 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:10.152 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.152 18:20:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:10.152 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.152 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:10.409 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.409 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:10.409 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.409 18:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:10.666 18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:10.666 18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:10.666 18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.666 18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:10.924 18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.924 
18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:11.181 18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:11.181 18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:11.439 18:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:11.724 18:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:12.665 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:12.665 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:12.665 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.665 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:12.923 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.923 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:12.923 
18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.923 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:13.181 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.181 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:13.181 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.181 18:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:13.440 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.440 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:13.440 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.440 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:13.697 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.697 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:13.697 
18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.698 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:13.955 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.955 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:13.955 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.955 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:14.213 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.213 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:14.213 18:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:14.471 18:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:14.729 18:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:32:15.662 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:15.662 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:15.662 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.662 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:15.918 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:15.918 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:15.918 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.918 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:16.174 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.174 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:16.174 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.174 18:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:32:16.432 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.432 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:16.432 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.432 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:16.689 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.689 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:16.689 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.689 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:16.947 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.947 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:16.947 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.947 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:32:17.205 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.205 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:17.205 18:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:17.463 18:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:17.720 18:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:19.094 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:19.094 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:19.094 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.094 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:19.094 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.094 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:19.094 18:20:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.094 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:19.352 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.352 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:19.352 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.352 18:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:19.609 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.609 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:19.610 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.610 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:19.867 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.867 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:19.867 18:20:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.867 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:20.125 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.125 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:20.125 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.125 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:20.382 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.382 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:20.382 18:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:20.640 18:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:20.898 18:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
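The `port_status` checks traced above pipe `bdev_nvme_get_io_paths` output through a `jq` select on `trsvcid`, then compare the extracted value using bash's `[[ true == \t\r\u\e ]]` form, in which every character of the right-hand side is backslash-escaped so it is matched literally rather than as a glob pattern. A minimal sketch of that comparison step (the `actual` value is hard-coded here as a stand-in for the jq output, which the real script fetches via `rpc.py` over `/var/tmp/bdevperf.sock`):

```shell
#!/usr/bin/env bash
# Stand-in for the attribute value jq would extract from the io_paths JSON;
# in the trace this is .current/.connected/.accessible for a given trsvcid.
actual="true"
expected="true"

# Escaping each character (\t\r\u\e in the trace) forces a literal match,
# guarding against the right-hand side being interpreted as a glob.
if [[ "$actual" == \t\r\u\e ]]; then
  echo "port status matches expected=$expected"
fi
```

The escaped form is what `set -x` prints for a quoted pattern, which is why the trace shows `\t\r\u\e` rather than `"true"`.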
00:32:21.831 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:21.831 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:21.831 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.831 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:22.088 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.088 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:22.088 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.088 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:22.346 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:22.346 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:22.346 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.346 18:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:32:22.604 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.604 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:22.604 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.604 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:22.862 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.862 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:22.862 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.862 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:23.120 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.120 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:23.120 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.120 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2472211 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2472211 ']' 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2472211 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2472211 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2472211' 00:32:23.378 killing process with pid 2472211 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2472211 00:32:23.378 18:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2472211 00:32:23.640 Connection closed with partial response: 00:32:23.640 00:32:23.640 00:32:23.640 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2472211 00:32:23.640 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:32:23.640 [2024-07-23 18:19:56.831281] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:32:23.640 [2024-07-23 18:19:56.831392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472211 ] 00:32:23.640 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.640 [2024-07-23 18:19:56.890773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.640 [2024-07-23 18:19:56.978883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:23.640 Running I/O for 90 seconds... 00:32:23.640 [2024-07-23 18:20:12.658219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:23.640 [2024-07-23 18:20:12.658465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:23.640 
[2024-07-23 18:20:12.658690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 
18:20:12.658897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.658918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.658934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.659056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.659079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.659140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.659160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.659183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.659199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.659222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.659238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.659261] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.659276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:23.640 [2024-07-23 18:20:12.659299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.640 [2024-07-23 18:20:12.659315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.659370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.659415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.659515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.659966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.659989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.660006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.660044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.660083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.660121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.641 [2024-07-23 18:20:12.660160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.660954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.660974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:23.641 [2024-07-23 18:20:12.661376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.641 [2024-07-23 18:20:12.661393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:32:23.641 [2024-07-23 18:20:12.661419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:23.642 [2024-07-23 18:20:12.661440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE and READ commands on sqid:1 (lba 64872-74360, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-07-23 18:20:12.661 through 18:20:28.337 ...]
00:32:23.645 [2024-07-23 18:20:28.337784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:23.645 [2024-07-23 18:20:28.337800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:32:23.645 [2024-07-23 18:20:28.337821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.337837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.337858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.337873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.337894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.337910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.337931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.337947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.337968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.337984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.645 [2024-07-23 18:20:28.338368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.645 [2024-07-23 18:20:28.338406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.645 [2024-07-23 18:20:28.338443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.645 [2024-07-23 18:20:28.338480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:23.645 [2024-07-23 18:20:28.338502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.645 [2024-07-23 18:20:28.338518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.645 Received shutdown signal, test time was about 32.411805 seconds 00:32:23.645 00:32:23.645 Latency(us) 00:32:23.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.645 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:23.645 Verification LBA range: start 0x0 length 0x4000 00:32:23.645 Nvme0n1 : 32.41 8019.59 31.33 0.00 0.00 15933.94 1699.08 4026531.84 00:32:23.645 =================================================================================================================== 00:32:23.645 Total : 8019.59 31.33 0.00 0.00 15933.94 1699.08 4026531.84 00:32:23.645 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:23.903 rmmod nvme_tcp 00:32:23.903 rmmod nvme_fabrics 00:32:23.903 rmmod nvme_keyring 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2471927 ']' 00:32:23.903 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2471927 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2471927 ']' 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2471927 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:23.904 
18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2471927 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2471927' 00:32:23.904 killing process with pid 2471927 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2471927 00:32:23.904 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2471927 00:32:24.163 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:24.163 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:24.163 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:24.163 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:24.163 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:24.163 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.163 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.163 18:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:26.700 00:32:26.700 real 0m41.305s 00:32:26.700 user 2m2.955s 
00:32:26.700 sys 0m11.256s 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:26.700 ************************************ 00:32:26.700 END TEST nvmf_host_multipath_status 00:32:26.700 ************************************ 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.700 ************************************ 00:32:26.700 START TEST nvmf_discovery_remove_ifc 00:32:26.700 ************************************ 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:26.700 * Looking for test storage... 
00:32:26.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:26.700 18:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.599 18:20:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:28.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:28.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:28.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.599 18:20:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.599 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:28.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.600 18:20:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:28.600 18:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:28.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:28.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:32:28.600 00:32:28.600 --- 10.0.0.2 ping statistics --- 00:32:28.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.600 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:28.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:32:28.600 00:32:28.600 --- 10.0.0.1 ping statistics --- 00:32:28.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.600 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2478289 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2478289 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2478289 ']' 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.600 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.600 [2024-07-23 18:20:36.131234] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:32:28.600 [2024-07-23 18:20:36.131309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.600 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.600 [2024-07-23 18:20:36.196120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.859 [2024-07-23 18:20:36.279147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.859 [2024-07-23 18:20:36.279203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.859 [2024-07-23 18:20:36.279230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.859 [2024-07-23 18:20:36.279241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.859 [2024-07-23 18:20:36.279250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:28.859 [2024-07-23 18:20:36.279275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.859 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:28.859 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:28.859 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:28.859 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:28.859 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.860 [2024-07-23 18:20:36.422843] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.860 [2024-07-23 18:20:36.431017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:28.860 null0 00:32:28.860 [2024-07-23 18:20:36.462975] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2478425 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2478425 /tmp/host.sock 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2478425 ']' 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:28.860 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.860 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.150 [2024-07-23 18:20:36.526556] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:32:29.150 [2024-07-23 18:20:36.526646] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478425 ] 00:32:29.150 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.150 [2024-07-23 18:20:36.585018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.150 [2024-07-23 18:20:36.668848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.150 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.408 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.408 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:29.408 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.408 18:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.339 [2024-07-23 18:20:37.881090] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:30.339 [2024-07-23 18:20:37.881114] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:30.339 [2024-07-23 18:20:37.881135] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:30.597 [2024-07-23 18:20:38.010579] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:30.597 [2024-07-23 18:20:38.111892] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:30.597 [2024-07-23 18:20:38.111947] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:30.597 [2024-07-23 18:20:38.111982] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:30.597 [2024-07-23 18:20:38.112002] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:30.597 [2024-07-23 18:20:38.112023] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.597 [2024-07-23 18:20:38.118722] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x580300 was disconnected and freed. delete nvme_qpair. 
00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:30.597 18:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.971 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:31.972 18:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:32.902 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.902 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.902 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.902 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.902 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.902 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.902 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:32:32.902 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.903 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:32.903 18:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:33.836 18:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:34.767 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:34.767 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.767 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:34.767 18:20:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.767 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:34.767 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:34.767 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:34.767 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.024 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:35.024 18:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:35.958 18:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:32:35.958 [2024-07-23 18:20:43.553955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:35.958 [2024-07-23 18:20:43.554044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.958 [2024-07-23 18:20:43.554066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.958 [2024-07-23 18:20:43.554085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.958 [2024-07-23 18:20:43.554099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.958 [2024-07-23 18:20:43.554112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.958 [2024-07-23 18:20:43.554125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.958 [2024-07-23 18:20:43.554138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.958 [2024-07-23 18:20:43.554151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.958 [2024-07-23 18:20:43.554165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.958 [2024-07-23 18:20:43.554178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.958 [2024-07-23 18:20:43.554192] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x546d00 is same with the state(5) to be set 00:32:35.958 [2024-07-23 18:20:43.563973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x546d00 (9): Bad file descriptor 00:32:35.958 [2024-07-23 18:20:43.574018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.888 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:36.888 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.888 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:36.888 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.888 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:36.888 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:36.888 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:37.145 [2024-07-23 18:20:44.637347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:37.145 [2024-07-23 18:20:44.637408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x546d00 with addr=10.0.0.2, port=4420 00:32:37.145 [2024-07-23 18:20:44.637431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x546d00 is same with the state(5) to be set 00:32:37.145 [2024-07-23 18:20:44.637473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x546d00 (9): Bad file descriptor 00:32:37.145 [2024-07-23 18:20:44.637904] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to 
perform failover, already in progress. 00:32:37.145 [2024-07-23 18:20:44.637947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:37.145 [2024-07-23 18:20:44.637964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:37.145 [2024-07-23 18:20:44.637981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:37.145 [2024-07-23 18:20:44.638005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.145 [2024-07-23 18:20:44.638021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:37.145 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.145 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:37.145 18:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:38.077 [2024-07-23 18:20:45.640518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:38.077 [2024-07-23 18:20:45.640553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:38.077 [2024-07-23 18:20:45.640582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:38.077 [2024-07-23 18:20:45.640596] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:38.077 [2024-07-23 18:20:45.640619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:38.077 [2024-07-23 18:20:45.640668] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:38.077 [2024-07-23 18:20:45.640705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.077 [2024-07-23 18:20:45.640741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.077 [2024-07-23 18:20:45.640759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.077 [2024-07-23 18:20:45.640773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.077 [2024-07-23 18:20:45.640786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.077 [2024-07-23 18:20:45.640799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.077 [2024-07-23 18:20:45.640812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.077 [2024-07-23 18:20:45.640825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.077 [2024-07-23 18:20:45.640845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.077 [2024-07-23 18:20:45.640859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.077 [2024-07-23 18:20:45.640871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:38.077 [2024-07-23 18:20:45.640981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x546160 (9): Bad file descriptor 00:32:38.077 [2024-07-23 18:20:45.641999] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:38.077 [2024-07-23 18:20:45.642019] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.077 18:20:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:38.077 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:38.335 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.335 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:38.335 18:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:39.264 18:20:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:39.264 18:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:40.196 [2024-07-23 18:20:47.656092] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:40.196 [2024-07-23 18:20:47.656127] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:40.196 [2024-07-23 18:20:47.656149] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:40.196 [2024-07-23 18:20:47.744424] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:40.196 [2024-07-23 18:20:47.807083] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:40.196 [2024-07-23 18:20:47.807130] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:40.196 [2024-07-23 18:20:47.807161] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:40.196 [2024-07-23 18:20:47.807181] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:40.196 [2024-07-23 18:20:47.807193] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:40.196 [2024-07-23 18:20:47.814477] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x5352b0 was disconnected and freed. delete nvme_qpair. 
00:32:40.196 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:40.196 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.196 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:40.196 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.196 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:40.196 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:40.196 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:40.196 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2478425 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2478425 ']' 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2478425 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2478425 
00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2478425' 00:32:40.454 killing process with pid 2478425 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2478425 00:32:40.454 18:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2478425 00:32:40.454 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:40.454 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:40.454 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:40.454 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:40.454 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:40.454 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:40.454 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:40.454 rmmod nvme_tcp 00:32:40.711 rmmod nvme_fabrics 00:32:40.711 rmmod nvme_keyring 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2478289 ']' 00:32:40.711 
18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2478289 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2478289 ']' 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2478289 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:40.711 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2478289 00:32:40.712 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:40.712 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:40.712 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2478289' 00:32:40.712 killing process with pid 2478289 00:32:40.712 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2478289 00:32:40.712 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2478289 00:32:40.971 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:40.971 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:40.971 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:40.971 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:40.971 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:32:40.971 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.971 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.971 18:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:42.876 00:32:42.876 real 0m16.587s 00:32:42.876 user 0m23.534s 00:32:42.876 sys 0m2.880s 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:42.876 ************************************ 00:32:42.876 END TEST nvmf_discovery_remove_ifc 00:32:42.876 ************************************ 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.876 ************************************ 00:32:42.876 START TEST nvmf_identify_kernel_target 00:32:42.876 ************************************ 00:32:42.876 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:43.135 * Looking for test storage... 
00:32:43.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:43.135 18:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:45.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.035 18:20:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:45.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.035 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:45.296 18:20:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:45.296 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:45.296 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:45.296 
18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.296 
18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:45.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:32:45.296 00:32:45.296 --- 10.0.0.2 ping statistics --- 00:32:45.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.296 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:32:45.296 00:32:45.296 --- 10.0.0.1 ping statistics --- 00:32:45.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.296 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.296 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.297 18:20:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:45.297 18:20:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:46.677 Waiting for block devices as requested 00:32:46.677 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:46.677 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:46.677 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:46.933 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:46.933 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:46.933 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:46.933 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:47.192 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:47.192 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:47.192 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:47.192 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:47.472 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:47.472 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:47.472 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:47.732 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:47.732 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:47.732 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:47.990 No valid GPT data, bailing 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:47.990 00:32:47.990 Discovery Log Number of Records 2, Generation counter 2 00:32:47.990 =====Discovery Log Entry 0====== 00:32:47.990 trtype: tcp 00:32:47.990 adrfam: ipv4 00:32:47.990 subtype: current discovery subsystem 00:32:47.990 treq: not specified, sq flow control disable supported 00:32:47.990 portid: 1 00:32:47.990 trsvcid: 4420 00:32:47.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:47.990 traddr: 10.0.0.1 00:32:47.990 eflags: none 00:32:47.990 sectype: none 00:32:47.990 =====Discovery Log Entry 1====== 00:32:47.990 trtype: tcp 00:32:47.990 adrfam: ipv4 00:32:47.990 subtype: nvme subsystem 00:32:47.990 treq: not specified, sq flow control disable supported 00:32:47.990 portid: 1 
00:32:47.990 trsvcid: 4420 00:32:47.990 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:47.990 traddr: 10.0.0.1 00:32:47.990 eflags: none 00:32:47.990 sectype: none 00:32:47.990 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:47.990 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:47.990 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.249 ===================================================== 00:32:48.249 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:48.249 ===================================================== 00:32:48.249 Controller Capabilities/Features 00:32:48.249 ================================ 00:32:48.249 Vendor ID: 0000 00:32:48.249 Subsystem Vendor ID: 0000 00:32:48.249 Serial Number: 460c4c6064dd27bd558e 00:32:48.249 Model Number: Linux 00:32:48.249 Firmware Version: 6.7.0-68 00:32:48.249 Recommended Arb Burst: 0 00:32:48.249 IEEE OUI Identifier: 00 00 00 00:32:48.249 Multi-path I/O 00:32:48.249 May have multiple subsystem ports: No 00:32:48.249 May have multiple controllers: No 00:32:48.249 Associated with SR-IOV VF: No 00:32:48.249 Max Data Transfer Size: Unlimited 00:32:48.249 Max Number of Namespaces: 0 00:32:48.249 Max Number of I/O Queues: 1024 00:32:48.249 NVMe Specification Version (VS): 1.3 00:32:48.249 NVMe Specification Version (Identify): 1.3 00:32:48.249 Maximum Queue Entries: 1024 00:32:48.249 Contiguous Queues Required: No 00:32:48.249 Arbitration Mechanisms Supported 00:32:48.249 Weighted Round Robin: Not Supported 00:32:48.249 Vendor Specific: Not Supported 00:32:48.249 Reset Timeout: 7500 ms 00:32:48.249 Doorbell Stride: 4 bytes 00:32:48.249 NVM Subsystem Reset: Not Supported 00:32:48.249 Command Sets Supported 00:32:48.249 NVM Command Set: Supported 00:32:48.249 Boot Partition: Not Supported 
00:32:48.249 Memory Page Size Minimum: 4096 bytes 00:32:48.249 Memory Page Size Maximum: 4096 bytes 00:32:48.249 Persistent Memory Region: Not Supported 00:32:48.249 Optional Asynchronous Events Supported 00:32:48.249 Namespace Attribute Notices: Not Supported 00:32:48.249 Firmware Activation Notices: Not Supported 00:32:48.249 ANA Change Notices: Not Supported 00:32:48.249 PLE Aggregate Log Change Notices: Not Supported 00:32:48.249 LBA Status Info Alert Notices: Not Supported 00:32:48.249 EGE Aggregate Log Change Notices: Not Supported 00:32:48.249 Normal NVM Subsystem Shutdown event: Not Supported 00:32:48.249 Zone Descriptor Change Notices: Not Supported 00:32:48.249 Discovery Log Change Notices: Supported 00:32:48.249 Controller Attributes 00:32:48.249 128-bit Host Identifier: Not Supported 00:32:48.249 Non-Operational Permissive Mode: Not Supported 00:32:48.249 NVM Sets: Not Supported 00:32:48.249 Read Recovery Levels: Not Supported 00:32:48.249 Endurance Groups: Not Supported 00:32:48.249 Predictable Latency Mode: Not Supported 00:32:48.249 Traffic Based Keep ALive: Not Supported 00:32:48.249 Namespace Granularity: Not Supported 00:32:48.249 SQ Associations: Not Supported 00:32:48.249 UUID List: Not Supported 00:32:48.249 Multi-Domain Subsystem: Not Supported 00:32:48.249 Fixed Capacity Management: Not Supported 00:32:48.249 Variable Capacity Management: Not Supported 00:32:48.249 Delete Endurance Group: Not Supported 00:32:48.249 Delete NVM Set: Not Supported 00:32:48.249 Extended LBA Formats Supported: Not Supported 00:32:48.249 Flexible Data Placement Supported: Not Supported 00:32:48.249 00:32:48.249 Controller Memory Buffer Support 00:32:48.249 ================================ 00:32:48.249 Supported: No 00:32:48.249 00:32:48.249 Persistent Memory Region Support 00:32:48.249 ================================ 00:32:48.249 Supported: No 00:32:48.249 00:32:48.249 Admin Command Set Attributes 00:32:48.249 ============================ 00:32:48.249 Security 
Send/Receive: Not Supported 00:32:48.249 Format NVM: Not Supported 00:32:48.249 Firmware Activate/Download: Not Supported 00:32:48.249 Namespace Management: Not Supported 00:32:48.249 Device Self-Test: Not Supported 00:32:48.249 Directives: Not Supported 00:32:48.249 NVMe-MI: Not Supported 00:32:48.249 Virtualization Management: Not Supported 00:32:48.249 Doorbell Buffer Config: Not Supported 00:32:48.249 Get LBA Status Capability: Not Supported 00:32:48.249 Command & Feature Lockdown Capability: Not Supported 00:32:48.249 Abort Command Limit: 1 00:32:48.249 Async Event Request Limit: 1 00:32:48.249 Number of Firmware Slots: N/A 00:32:48.249 Firmware Slot 1 Read-Only: N/A 00:32:48.249 Firmware Activation Without Reset: N/A 00:32:48.249 Multiple Update Detection Support: N/A 00:32:48.249 Firmware Update Granularity: No Information Provided 00:32:48.249 Per-Namespace SMART Log: No 00:32:48.249 Asymmetric Namespace Access Log Page: Not Supported 00:32:48.249 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:48.249 Command Effects Log Page: Not Supported 00:32:48.249 Get Log Page Extended Data: Supported 00:32:48.249 Telemetry Log Pages: Not Supported 00:32:48.249 Persistent Event Log Pages: Not Supported 00:32:48.249 Supported Log Pages Log Page: May Support 00:32:48.249 Commands Supported & Effects Log Page: Not Supported 00:32:48.249 Feature Identifiers & Effects Log Page:May Support 00:32:48.249 NVMe-MI Commands & Effects Log Page: May Support 00:32:48.249 Data Area 4 for Telemetry Log: Not Supported 00:32:48.249 Error Log Page Entries Supported: 1 00:32:48.249 Keep Alive: Not Supported 00:32:48.249 00:32:48.249 NVM Command Set Attributes 00:32:48.249 ========================== 00:32:48.249 Submission Queue Entry Size 00:32:48.249 Max: 1 00:32:48.249 Min: 1 00:32:48.249 Completion Queue Entry Size 00:32:48.249 Max: 1 00:32:48.249 Min: 1 00:32:48.249 Number of Namespaces: 0 00:32:48.249 Compare Command: Not Supported 00:32:48.249 Write Uncorrectable Command: 
Not Supported 00:32:48.249 Dataset Management Command: Not Supported 00:32:48.249 Write Zeroes Command: Not Supported 00:32:48.249 Set Features Save Field: Not Supported 00:32:48.249 Reservations: Not Supported 00:32:48.249 Timestamp: Not Supported 00:32:48.249 Copy: Not Supported 00:32:48.249 Volatile Write Cache: Not Present 00:32:48.249 Atomic Write Unit (Normal): 1 00:32:48.249 Atomic Write Unit (PFail): 1 00:32:48.249 Atomic Compare & Write Unit: 1 00:32:48.249 Fused Compare & Write: Not Supported 00:32:48.249 Scatter-Gather List 00:32:48.249 SGL Command Set: Supported 00:32:48.249 SGL Keyed: Not Supported 00:32:48.249 SGL Bit Bucket Descriptor: Not Supported 00:32:48.249 SGL Metadata Pointer: Not Supported 00:32:48.249 Oversized SGL: Not Supported 00:32:48.249 SGL Metadata Address: Not Supported 00:32:48.249 SGL Offset: Supported 00:32:48.249 Transport SGL Data Block: Not Supported 00:32:48.249 Replay Protected Memory Block: Not Supported 00:32:48.249 00:32:48.249 Firmware Slot Information 00:32:48.249 ========================= 00:32:48.249 Active slot: 0 00:32:48.249 00:32:48.249 00:32:48.249 Error Log 00:32:48.249 ========= 00:32:48.249 00:32:48.249 Active Namespaces 00:32:48.249 ================= 00:32:48.249 Discovery Log Page 00:32:48.249 ================== 00:32:48.249 Generation Counter: 2 00:32:48.249 Number of Records: 2 00:32:48.249 Record Format: 0 00:32:48.249 00:32:48.249 Discovery Log Entry 0 00:32:48.249 ---------------------- 00:32:48.249 Transport Type: 3 (TCP) 00:32:48.249 Address Family: 1 (IPv4) 00:32:48.249 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:48.249 Entry Flags: 00:32:48.249 Duplicate Returned Information: 0 00:32:48.249 Explicit Persistent Connection Support for Discovery: 0 00:32:48.249 Transport Requirements: 00:32:48.249 Secure Channel: Not Specified 00:32:48.249 Port ID: 1 (0x0001) 00:32:48.249 Controller ID: 65535 (0xffff) 00:32:48.249 Admin Max SQ Size: 32 00:32:48.249 Transport Service Identifier: 4420 
00:32:48.249 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:48.249 Transport Address: 10.0.0.1 00:32:48.249 Discovery Log Entry 1 00:32:48.249 ---------------------- 00:32:48.249 Transport Type: 3 (TCP) 00:32:48.249 Address Family: 1 (IPv4) 00:32:48.249 Subsystem Type: 2 (NVM Subsystem) 00:32:48.249 Entry Flags: 00:32:48.249 Duplicate Returned Information: 0 00:32:48.249 Explicit Persistent Connection Support for Discovery: 0 00:32:48.249 Transport Requirements: 00:32:48.250 Secure Channel: Not Specified 00:32:48.250 Port ID: 1 (0x0001) 00:32:48.250 Controller ID: 65535 (0xffff) 00:32:48.250 Admin Max SQ Size: 32 00:32:48.250 Transport Service Identifier: 4420 00:32:48.250 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:48.250 Transport Address: 10.0.0.1 00:32:48.250 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:48.250 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.250 get_feature(0x01) failed 00:32:48.250 get_feature(0x02) failed 00:32:48.250 get_feature(0x04) failed 00:32:48.250 ===================================================== 00:32:48.250 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:48.250 ===================================================== 00:32:48.250 Controller Capabilities/Features 00:32:48.250 ================================ 00:32:48.250 Vendor ID: 0000 00:32:48.250 Subsystem Vendor ID: 0000 00:32:48.250 Serial Number: c867bc5cf7c5c89b0403 00:32:48.250 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:48.250 Firmware Version: 6.7.0-68 00:32:48.250 Recommended Arb Burst: 6 00:32:48.250 IEEE OUI Identifier: 00 00 00 00:32:48.250 Multi-path I/O 00:32:48.250 May have multiple subsystem ports: Yes 00:32:48.250 May have multiple 
controllers: Yes 00:32:48.250 Associated with SR-IOV VF: No 00:32:48.250 Max Data Transfer Size: Unlimited 00:32:48.250 Max Number of Namespaces: 1024 00:32:48.250 Max Number of I/O Queues: 128 00:32:48.250 NVMe Specification Version (VS): 1.3 00:32:48.250 NVMe Specification Version (Identify): 1.3 00:32:48.250 Maximum Queue Entries: 1024 00:32:48.250 Contiguous Queues Required: No 00:32:48.250 Arbitration Mechanisms Supported 00:32:48.250 Weighted Round Robin: Not Supported 00:32:48.250 Vendor Specific: Not Supported 00:32:48.250 Reset Timeout: 7500 ms 00:32:48.250 Doorbell Stride: 4 bytes 00:32:48.250 NVM Subsystem Reset: Not Supported 00:32:48.250 Command Sets Supported 00:32:48.250 NVM Command Set: Supported 00:32:48.250 Boot Partition: Not Supported 00:32:48.250 Memory Page Size Minimum: 4096 bytes 00:32:48.250 Memory Page Size Maximum: 4096 bytes 00:32:48.250 Persistent Memory Region: Not Supported 00:32:48.250 Optional Asynchronous Events Supported 00:32:48.250 Namespace Attribute Notices: Supported 00:32:48.250 Firmware Activation Notices: Not Supported 00:32:48.250 ANA Change Notices: Supported 00:32:48.250 PLE Aggregate Log Change Notices: Not Supported 00:32:48.250 LBA Status Info Alert Notices: Not Supported 00:32:48.250 EGE Aggregate Log Change Notices: Not Supported 00:32:48.250 Normal NVM Subsystem Shutdown event: Not Supported 00:32:48.250 Zone Descriptor Change Notices: Not Supported 00:32:48.250 Discovery Log Change Notices: Not Supported 00:32:48.250 Controller Attributes 00:32:48.250 128-bit Host Identifier: Supported 00:32:48.250 Non-Operational Permissive Mode: Not Supported 00:32:48.250 NVM Sets: Not Supported 00:32:48.250 Read Recovery Levels: Not Supported 00:32:48.250 Endurance Groups: Not Supported 00:32:48.250 Predictable Latency Mode: Not Supported 00:32:48.250 Traffic Based Keep ALive: Supported 00:32:48.250 Namespace Granularity: Not Supported 00:32:48.250 SQ Associations: Not Supported 00:32:48.250 UUID List: Not Supported 
00:32:48.250 Multi-Domain Subsystem: Not Supported 00:32:48.250 Fixed Capacity Management: Not Supported 00:32:48.250 Variable Capacity Management: Not Supported 00:32:48.250 Delete Endurance Group: Not Supported 00:32:48.250 Delete NVM Set: Not Supported 00:32:48.250 Extended LBA Formats Supported: Not Supported 00:32:48.250 Flexible Data Placement Supported: Not Supported 00:32:48.250 00:32:48.250 Controller Memory Buffer Support 00:32:48.250 ================================ 00:32:48.250 Supported: No 00:32:48.250 00:32:48.250 Persistent Memory Region Support 00:32:48.250 ================================ 00:32:48.250 Supported: No 00:32:48.250 00:32:48.250 Admin Command Set Attributes 00:32:48.250 ============================ 00:32:48.250 Security Send/Receive: Not Supported 00:32:48.250 Format NVM: Not Supported 00:32:48.250 Firmware Activate/Download: Not Supported 00:32:48.250 Namespace Management: Not Supported 00:32:48.250 Device Self-Test: Not Supported 00:32:48.250 Directives: Not Supported 00:32:48.250 NVMe-MI: Not Supported 00:32:48.250 Virtualization Management: Not Supported 00:32:48.250 Doorbell Buffer Config: Not Supported 00:32:48.250 Get LBA Status Capability: Not Supported 00:32:48.250 Command & Feature Lockdown Capability: Not Supported 00:32:48.250 Abort Command Limit: 4 00:32:48.250 Async Event Request Limit: 4 00:32:48.250 Number of Firmware Slots: N/A 00:32:48.250 Firmware Slot 1 Read-Only: N/A 00:32:48.250 Firmware Activation Without Reset: N/A 00:32:48.250 Multiple Update Detection Support: N/A 00:32:48.250 Firmware Update Granularity: No Information Provided 00:32:48.250 Per-Namespace SMART Log: Yes 00:32:48.250 Asymmetric Namespace Access Log Page: Supported 00:32:48.250 ANA Transition Time : 10 sec 00:32:48.250 00:32:48.250 Asymmetric Namespace Access Capabilities 00:32:48.250 ANA Optimized State : Supported 00:32:48.250 ANA Non-Optimized State : Supported 00:32:48.250 ANA Inaccessible State : Supported 00:32:48.250 ANA Persistent Loss 
State : Supported 00:32:48.250 ANA Change State : Supported 00:32:48.250 ANAGRPID is not changed : No 00:32:48.250 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:48.250 00:32:48.250 ANA Group Identifier Maximum : 128 00:32:48.250 Number of ANA Group Identifiers : 128 00:32:48.250 Max Number of Allowed Namespaces : 1024 00:32:48.250 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:48.250 Command Effects Log Page: Supported 00:32:48.250 Get Log Page Extended Data: Supported 00:32:48.250 Telemetry Log Pages: Not Supported 00:32:48.250 Persistent Event Log Pages: Not Supported 00:32:48.250 Supported Log Pages Log Page: May Support 00:32:48.250 Commands Supported & Effects Log Page: Not Supported 00:32:48.250 Feature Identifiers & Effects Log Page:May Support 00:32:48.250 NVMe-MI Commands & Effects Log Page: May Support 00:32:48.250 Data Area 4 for Telemetry Log: Not Supported 00:32:48.250 Error Log Page Entries Supported: 128 00:32:48.250 Keep Alive: Supported 00:32:48.250 Keep Alive Granularity: 1000 ms 00:32:48.250 00:32:48.250 NVM Command Set Attributes 00:32:48.250 ========================== 00:32:48.250 Submission Queue Entry Size 00:32:48.250 Max: 64 00:32:48.250 Min: 64 00:32:48.250 Completion Queue Entry Size 00:32:48.250 Max: 16 00:32:48.250 Min: 16 00:32:48.250 Number of Namespaces: 1024 00:32:48.250 Compare Command: Not Supported 00:32:48.250 Write Uncorrectable Command: Not Supported 00:32:48.250 Dataset Management Command: Supported 00:32:48.250 Write Zeroes Command: Supported 00:32:48.250 Set Features Save Field: Not Supported 00:32:48.250 Reservations: Not Supported 00:32:48.250 Timestamp: Not Supported 00:32:48.250 Copy: Not Supported 00:32:48.250 Volatile Write Cache: Present 00:32:48.250 Atomic Write Unit (Normal): 1 00:32:48.250 Atomic Write Unit (PFail): 1 00:32:48.250 Atomic Compare & Write Unit: 1 00:32:48.250 Fused Compare & Write: Not Supported 00:32:48.250 Scatter-Gather List 00:32:48.250 SGL Command Set: Supported 00:32:48.250 SGL 
Keyed: Not Supported 00:32:48.250 SGL Bit Bucket Descriptor: Not Supported 00:32:48.250 SGL Metadata Pointer: Not Supported 00:32:48.250 Oversized SGL: Not Supported 00:32:48.250 SGL Metadata Address: Not Supported 00:32:48.250 SGL Offset: Supported 00:32:48.250 Transport SGL Data Block: Not Supported 00:32:48.250 Replay Protected Memory Block: Not Supported 00:32:48.250 00:32:48.250 Firmware Slot Information 00:32:48.250 ========================= 00:32:48.250 Active slot: 0 00:32:48.250 00:32:48.250 Asymmetric Namespace Access 00:32:48.250 =========================== 00:32:48.250 Change Count : 0 00:32:48.250 Number of ANA Group Descriptors : 1 00:32:48.250 ANA Group Descriptor : 0 00:32:48.250 ANA Group ID : 1 00:32:48.250 Number of NSID Values : 1 00:32:48.250 Change Count : 0 00:32:48.250 ANA State : 1 00:32:48.251 Namespace Identifier : 1 00:32:48.251 00:32:48.251 Commands Supported and Effects 00:32:48.251 ============================== 00:32:48.251 Admin Commands 00:32:48.251 -------------- 00:32:48.251 Get Log Page (02h): Supported 00:32:48.251 Identify (06h): Supported 00:32:48.251 Abort (08h): Supported 00:32:48.251 Set Features (09h): Supported 00:32:48.251 Get Features (0Ah): Supported 00:32:48.251 Asynchronous Event Request (0Ch): Supported 00:32:48.251 Keep Alive (18h): Supported 00:32:48.251 I/O Commands 00:32:48.251 ------------ 00:32:48.251 Flush (00h): Supported 00:32:48.251 Write (01h): Supported LBA-Change 00:32:48.251 Read (02h): Supported 00:32:48.251 Write Zeroes (08h): Supported LBA-Change 00:32:48.251 Dataset Management (09h): Supported 00:32:48.251 00:32:48.251 Error Log 00:32:48.251 ========= 00:32:48.251 Entry: 0 00:32:48.251 Error Count: 0x3 00:32:48.251 Submission Queue Id: 0x0 00:32:48.251 Command Id: 0x5 00:32:48.251 Phase Bit: 0 00:32:48.251 Status Code: 0x2 00:32:48.251 Status Code Type: 0x0 00:32:48.251 Do Not Retry: 1 00:32:48.251 Error Location: 0x28 00:32:48.251 LBA: 0x0 00:32:48.251 Namespace: 0x0 00:32:48.251 Vendor Log Page: 
0x0 00:32:48.251 ----------- 00:32:48.251 Entry: 1 00:32:48.251 Error Count: 0x2 00:32:48.251 Submission Queue Id: 0x0 00:32:48.251 Command Id: 0x5 00:32:48.251 Phase Bit: 0 00:32:48.251 Status Code: 0x2 00:32:48.251 Status Code Type: 0x0 00:32:48.251 Do Not Retry: 1 00:32:48.251 Error Location: 0x28 00:32:48.251 LBA: 0x0 00:32:48.251 Namespace: 0x0 00:32:48.251 Vendor Log Page: 0x0 00:32:48.251 ----------- 00:32:48.251 Entry: 2 00:32:48.251 Error Count: 0x1 00:32:48.251 Submission Queue Id: 0x0 00:32:48.251 Command Id: 0x4 00:32:48.251 Phase Bit: 0 00:32:48.251 Status Code: 0x2 00:32:48.251 Status Code Type: 0x0 00:32:48.251 Do Not Retry: 1 00:32:48.251 Error Location: 0x28 00:32:48.251 LBA: 0x0 00:32:48.251 Namespace: 0x0 00:32:48.251 Vendor Log Page: 0x0 00:32:48.251 00:32:48.251 Number of Queues 00:32:48.251 ================ 00:32:48.251 Number of I/O Submission Queues: 128 00:32:48.251 Number of I/O Completion Queues: 128 00:32:48.251 00:32:48.251 ZNS Specific Controller Data 00:32:48.251 ============================ 00:32:48.251 Zone Append Size Limit: 0 00:32:48.251 00:32:48.251 00:32:48.251 Active Namespaces 00:32:48.251 ================= 00:32:48.251 get_feature(0x05) failed 00:32:48.251 Namespace ID:1 00:32:48.251 Command Set Identifier: NVM (00h) 00:32:48.251 Deallocate: Supported 00:32:48.251 Deallocated/Unwritten Error: Not Supported 00:32:48.251 Deallocated Read Value: Unknown 00:32:48.251 Deallocate in Write Zeroes: Not Supported 00:32:48.251 Deallocated Guard Field: 0xFFFF 00:32:48.251 Flush: Supported 00:32:48.251 Reservation: Not Supported 00:32:48.251 Namespace Sharing Capabilities: Multiple Controllers 00:32:48.251 Size (in LBAs): 1953525168 (931GiB) 00:32:48.251 Capacity (in LBAs): 1953525168 (931GiB) 00:32:48.251 Utilization (in LBAs): 1953525168 (931GiB) 00:32:48.251 UUID: 6858adb2-788f-4862-859c-b26b4cbb143a 00:32:48.251 Thin Provisioning: Not Supported 00:32:48.251 Per-NS Atomic Units: Yes 00:32:48.251 Atomic Boundary Size (Normal): 0 
00:32:48.251 Atomic Boundary Size (PFail): 0 00:32:48.251 Atomic Boundary Offset: 0 00:32:48.251 NGUID/EUI64 Never Reused: No 00:32:48.251 ANA group ID: 1 00:32:48.251 Namespace Write Protected: No 00:32:48.251 Number of LBA Formats: 1 00:32:48.251 Current LBA Format: LBA Format #00 00:32:48.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:48.251 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:48.251 rmmod nvme_tcp 00:32:48.251 rmmod nvme_fabrics 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:48.251 
18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.251 18:20:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:50.782 18:20:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:50.782 18:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:51.719 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:51.719 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:51.719 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:51.719 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:51.719 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:51.719 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:51.719 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:51.719 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:51.719 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:51.719 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:51.719 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:51.719 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:51.719 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:51.719 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:51.719 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:51.719 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:52.656 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:52.656 00:32:52.656 real 0m9.768s 00:32:52.656 user 0m2.088s 00:32:52.656 sys 0m3.561s 00:32:52.656 18:21:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:52.656 18:21:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.656 ************************************ 00:32:52.656 END TEST nvmf_identify_kernel_target 00:32:52.656 ************************************ 00:32:52.656 18:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:52.656 18:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh 
--transport=tcp 00:32:52.656 18:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:52.656 18:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:52.656 18:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.914 ************************************ 00:32:52.914 START TEST nvmf_auth_host 00:32:52.914 ************************************ 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:52.914 * Looking for test storage... 00:32:52.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.914 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.915 18:21:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:52.915 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:54.817 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:54.818 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:54.818 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.818 18:21:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:54.818 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:54.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:54.818 
18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:54.818 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.076 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.076 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.076 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:55.076 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.076 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.076 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.076 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:55.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:32:55.076 00:32:55.076 --- 10.0.0.2 ping statistics --- 00:32:55.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.076 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:32:55.076 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:55.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:32:55.076 00:32:55.076 --- 10.0.0.1 ping statistics --- 00:32:55.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.077 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2485476 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:55.077 18:21:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2485476 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2485476 ']' 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:55.077 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3ed6cb6e9468225af4623d403f041795 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kE3 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3ed6cb6e9468225af4623d403f041795 0 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3ed6cb6e9468225af4623d403f041795 0 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3ed6cb6e9468225af4623d403f041795 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kE3 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kE3 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kE3 00:32:55.335 18:21:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b802301bf21e0bf7a41dfa2be677273b476ae74b8f1195f754885c7b477e4868 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.p5J 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b802301bf21e0bf7a41dfa2be677273b476ae74b8f1195f754885c7b477e4868 3 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b802301bf21e0bf7a41dfa2be677273b476ae74b8f1195f754885c7b477e4868 3 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b802301bf21e0bf7a41dfa2be677273b476ae74b8f1195f754885c7b477e4868 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:55.335 18:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 
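The `gen_dhchap_key` runs above read random bytes with `xxd -p` over `/dev/urandom` and then pipe the hex secret through an inline `python -` heredoc (`format_key`), whose body xtrace does not echo. A plausible reconstruction of that formatting step, assuming the conventional NVMe DH-HMAC-CHAP container encoding (secret bytes followed by a little-endian CRC32, base64-encoded, with the digest id in the header), is:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Wrap an ASCII secret in the DHHC-1 container format.

    The transported value is base64(secret || crc32_le(secret)); the digest
    field selects the hash (0 = null, 1 = sha256, 2 = sha384, 3 = sha512),
    matching the digest map the log shows above.
    """
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"

def parse_dhchap_key(formatted: str) -> str:
    """Reverse the encoding and verify the trailing CRC32."""
    prefix, digest, b64, _ = formatted.split(":")
    blob = base64.b64decode(b64)
    raw, crc = blob[:-4], blob[-4:]
    assert zlib.crc32(raw).to_bytes(4, "little") == crc, "CRC mismatch"
    return raw.decode("ascii")
```

The key shown later in the log (`DHHC-1:00:MTIwZWNj...`) is consistent with this encoding: the base64 payload begins with the secret's own hex string. This is a sketch of the apparent scheme, not the verbatim script from `nvmf/common.sh`.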
00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.p5J 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.p5J 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.p5J 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=120ecca28753e3cca23234800991a336a96e28919fc3f0d6 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Zqv 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 120ecca28753e3cca23234800991a336a96e28919fc3f0d6 0 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 120ecca28753e3cca23234800991a336a96e28919fc3f0d6 0 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.594 18:21:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=120ecca28753e3cca23234800991a336a96e28919fc3f0d6 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Zqv 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Zqv 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Zqv 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=62bcad67515a175ab03c695a45f9d529d81eb2b8e4268297 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PTc 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 62bcad67515a175ab03c695a45f9d529d81eb2b8e4268297 2 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # 
format_key DHHC-1 62bcad67515a175ab03c695a45f9d529d81eb2b8e4268297 2 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=62bcad67515a175ab03c695a45f9d529d81eb2b8e4268297 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PTc 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PTc 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.PTc 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93ac6ad393d703113434bf61d07e544b 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MFF 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93ac6ad393d703113434bf61d07e544b 1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93ac6ad393d703113434bf61d07e544b 1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=93ac6ad393d703113434bf61d07e544b 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MFF 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MFF 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.MFF 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@727 -- # key=0c84a69c1553d093ec6fc2ab9624132f 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BJ1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c84a69c1553d093ec6fc2ab9624132f 1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c84a69c1553d093ec6fc2ab9624132f 1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c84a69c1553d093ec6fc2ab9624132f 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BJ1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BJ1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BJ1 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.594 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:55.595 18:21:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4b3a1469cd15cee96a075704f57d09e2f81f5b03c0173ced 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GhL 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4b3a1469cd15cee96a075704f57d09e2f81f5b03c0173ced 2 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4b3a1469cd15cee96a075704f57d09e2f81f5b03c0173ced 2 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4b3a1469cd15cee96a075704f57d09e2f81f5b03c0173ced 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:55.595 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GhL 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GhL 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.GhL 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f0fd6a2eab84cb77a915f3275c0b7279 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BSB 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f0fd6a2eab84cb77a915f3275c0b7279 0 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f0fd6a2eab84cb77a915f3275c0b7279 0 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f0fd6a2eab84cb77a915f3275c0b7279 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BSB 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BSB 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.BSB 00:32:55.852 18:21:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=249c13c3bda60fe7e967472f259b56bc74a236a56719c96aeefff3223edd806b 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RPE 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 249c13c3bda60fe7e967472f259b56bc74a236a56719c96aeefff3223edd806b 3 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 249c13c3bda60fe7e967472f259b56bc74a236a56719c96aeefff3223edd806b 3 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=249c13c3bda60fe7e967472f259b56bc74a236a56719c96aeefff3223edd806b 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 
00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RPE 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RPE 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.RPE 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2485476 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2485476 ']' 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
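Once the target process is listening, `host/auth.sh` iterates over the generated key files and registers each one (and its optional challenge counterpart, `ckey`) via the `keyring_file_add_key` RPC, as the log entries that follow show. A dry-run sketch of that loop, with `RPC=echo` standing in for `scripts/rpc.py` so nothing is actually sent (the key paths are the ones the log generated above):

```shell
RPC="${RPC:-echo}"   # point at scripts/rpc.py to talk to a live SPDK target

# first two key/ckey pairs from the log, for illustration
keys=(/tmp/spdk.key-null.kE3 /tmp/spdk.key-null.Zqv)
ckeys=(/tmp/spdk.key-sha512.p5J /tmp/spdk.key-sha384.PTc)

for i in "${!keys[@]}"; do
    $RPC keyring_file_add_key "key$i" "${keys[i]}"
    # challenge keys are optional; ckeys[4] is empty in the real run
    if [[ -n "${ckeys[i]:-}" ]]; then
        $RPC keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
```

This mirrors the structure of the `for i in "${!keys[@]}"` loop in the log; the real script additionally checks each RPC's return status via `xtrace_disable` wrappers.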
00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:55.852 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kE3 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.p5J ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.p5J 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Zqv 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.PTc ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PTc 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.MFF 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BJ1 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BJ1 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.GhL 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.BSB ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.BSB 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.RPE 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.110 18:21:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.110 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:56.111 18:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:57.044 Waiting for block devices as requested 00:32:57.302 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:57.302 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:57.560 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:57.560 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:57.560 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:57.818 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:57.818 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:57.818 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:57.818 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:58.075 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:58.075 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:58.075 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:58.075 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:58.333 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:58.333 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:58.333 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:58.333 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]]
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:32:58.899 No valid GPT data, bailing
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:32:58.899
00:32:58.899 Discovery Log Number of Records 2, Generation counter 2
00:32:58.899 =====Discovery Log Entry 0======
00:32:58.899 trtype: tcp
00:32:58.899 adrfam: ipv4
00:32:58.899 subtype: current discovery subsystem
00:32:58.899 treq: not specified, sq flow control disable supported
00:32:58.899 portid: 1
00:32:58.899 trsvcid: 4420
00:32:58.899 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:32:58.899 traddr: 10.0.0.1
00:32:58.899 eflags: none
00:32:58.899 sectype: none
00:32:58.899 =====Discovery Log Entry 1======
00:32:58.899 trtype: tcp
00:32:58.899 adrfam: ipv4
00:32:58.899 subtype: nvme subsystem
00:32:58.899 treq: not specified, sq flow control disable supported
00:32:58.899 portid: 1
00:32:58.899 trsvcid: 4420
00:32:58.899 subnqn: nqn.2024-02.io.spdk:cnode0
00:32:58.899 traddr: 10.0.0.1
00:32:58.899 eflags: none
00:32:58.899 sectype: none
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]]
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:58.899 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:58.900 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.158 nvme0n1
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.158 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.416 nvme0n1
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.417 18:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.675 nvme0n1
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]]
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.675 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.676 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.934 nvme0n1
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.934 nvme0n1
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.934 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:00.192 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.193 nvme0n1
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.193 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]]
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:00.451 18:21:07
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.451 18:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.451 nvme0n1 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.451 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.708 18:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.708 18:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.708 nvme0n1 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.708 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.966 18:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.966 nvme0n1 00:33:00.966 18:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:00.966 18:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.966 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.224 nvme0n1 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.224 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.482 18:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.482 18:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.482 nvme0n1 00:33:01.482 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.482 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.482 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.482 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.482 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.482 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.482 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.482 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.483 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.741 nvme0n1 00:33:01.741 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.999 
18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.999 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.257 nvme0n1 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.257 18:21:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.257 18:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.515 nvme0n1 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.515 18:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:02.515 
18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.515 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.516 18:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.516 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.773 nvme0n1 00:33:02.773 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.773 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.773 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.773 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.773 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.773 18:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.031 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.032 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.032 
18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.032 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.032 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.032 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.032 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.032 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.290 nvme0n1 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.290 18:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.290 18:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.861 nvme0n1 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.861 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.862 18:21:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:03.862 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.476 nvme0n1
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.476 18:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.734 nvme0n1
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.992 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:05.558 nvme0n1
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:05.558 18:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:05.558 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:06.122 nvme0n1
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:06.122 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]]
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:06.123 18:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:07.053 nvme0n1
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.053 18:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:07.987 nvme0n1
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.987 18:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:08.920 nvme0n1
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:08.920 18:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:09.485 nvme0n1
00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:09.485 18:21:17
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.485 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.486 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.743 18:21:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.743 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 nvme0n1 00:33:10.566 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.566 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.566 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.566 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.566 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.566 18:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.566 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.566 nvme0n1 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.566 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.567 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.567 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.567 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.824 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:10.824 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.825 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.825 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.825 nvme0n1 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]] 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.825 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.083 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.083 nvme0n1 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.083 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:11.083 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.083 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.083 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.341 nvme0n1 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.341 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.341 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.342 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.342 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.342 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.342 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.342 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.342 18:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.342 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.342 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.342 18:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.600 nvme0n1 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.600 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.857 nvme0n1 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:11.857 18:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.857 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.115 nvme0n1 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.115 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.373 nvme0n1 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:12.373 18:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.373 18:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.631 nvme0n1 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.631 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.889 nvme0n1 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.889 18:21:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.889 18:21:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:12.889 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.889 18:21:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.147 nvme0n1 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.147 
18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.147 18:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.405 nvme0n1 00:33:13.405 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.405 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.405 18:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.405 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.405 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.405 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:13.662 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.662 18:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]] 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.663 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.921 nvme0n1 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.921 18:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.921 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.179 nvme0n1 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.179 18:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:14.179 18:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:14.179 
18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.179 18:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.437 nvme0n1 00:33:14.437 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.437 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.437 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.437 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.437 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.437 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.695 18:21:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.695 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 nvme0n1 
00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:15.260 18:21:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.260 
18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.260 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.261 18:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.518 nvme0n1 00:33:15.518 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.518 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.518 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.518 18:21:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.518 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.518 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.775 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.775 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.775 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.775 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]] 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.776 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.033 nvme0n1 00:33:16.033 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.033 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.033 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.033 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.033 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 
-- # ip=NVMF_INITIATOR_IP 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.290 18:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.855 nvme0n1 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.855 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:17.113 nvme0n1 00:33:17.113 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.113 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.113 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.113 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.113 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.113 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:17.370 18:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.370 18:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.370 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.371 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.371 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.371 18:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.304 nvme0n1 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:18.304 18:21:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.304 18:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.237 nvme0n1 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.237 
18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]]
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.237 18:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.832 nvme0n1
00:33:19.832 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.832 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:19.832 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:19.832 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.832 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]]
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:20.090 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:20.091 18:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.024 nvme0n1
00:33:21.024 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.024 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:21.024 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.024 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:21.024 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.024 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.024 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.025 18:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.957 nvme0n1
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:33:21.957 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.958 nvme0n1
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.958 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.216 nvme0n1
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]]
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.216 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.217 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.475 nvme0n1
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.475 18:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.475 nvme0n1
00:33:22.475 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.475 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:22.475 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.475 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:22.475 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:22.733 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.734 18:21:30
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.734 nvme0n1 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.734 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.993 18:21:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 nvme0n1 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.993 18:21:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.993 
18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.993 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.251 nvme0n1 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]] 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 
00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.251 18:21:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.251 18:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.509 nvme0n1 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.509 18:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.509 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.767 nvme0n1 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.767 18:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:23.767 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.768 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.026 nvme0n1 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.026 18:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.026 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.284 nvme0n1 00:33:24.284 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.284 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.284 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.284 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.284 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.284 18:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:24.542 18:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.542 18:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.542 18:21:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.542 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.801 nvme0n1 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.801 18:21:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW: 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:24.801 18:21:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.801 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.059 nvme0n1 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.059 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:25.060 18:21:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.060 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.317 nvme0n1 00:33:25.317 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.317 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.317 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.317 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.317 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.317 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.575 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.575 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.575 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.575 18:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.575 
18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.575 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.833 nvme0n1 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.833 18:21:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp: 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]] 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:25.833 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:26.398 nvme0n1
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:26.398 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]]
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.399 18:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:26.964 nvme0n1
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.964 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]]
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.965 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:27.530 nvme0n1
00:33:27.530 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:27.530 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:27.530 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:27.530 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==:
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]]
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5:
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:27.531 18:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:28.096 nvme0n1
00:33:28.096 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:28.096 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:28.096 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:28.096 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:28.096 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=:
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:28.097 18:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:28.355 nvme0n1
00:33:28.355 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:28.355 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:28.355 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:28.355 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:28.355 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:28.355 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:28.612 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:28.612 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:28.612 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VkNmNiNmU5NDY4MjI1YWY0NjIzZDQwM2YwNDE3OTVqxvDp:
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=: ]]
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjgwMjMwMWJmMjFlMGJmN2E0MWRmYTJiZTY3NzI3M2I0NzZhZTc0YjhmMTE5NWY3NTQ4ODVjN2I0NzdlNDg2OJTPBDs=:
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:28.613 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:29.544 nvme0n1
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==:
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==:
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:29.544 18:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:30.477 nvme0n1
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNhYzZhZDM5M2Q3MDMxMTM0MzRiZjYxZDA3ZTU0NGJpjKhW:
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j: ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM4NGE2OWMxNTUzZDA5M2VjNmZjMmFiOTYyNDEzMmZ5xv0j:
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.477 18:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.409 nvme0n1 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.409 18:21:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGIzYTE0NjljZDE1Y2VlOTZhMDc1NzA0ZjU3ZDA5ZTJmODFmNWIwM2MwMTczY2VkdBD6xA==: 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBmZDZhMmVhYjg0Y2I3N2E5MTVmMzI3NWMwYjcyNzmrzRj5: 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.409 18:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:32.339 nvme0n1 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjQ5YzEzYzNiZGE2MGZlN2U5Njc0NzJmMjU5YjU2YmM3NGEyMzZhNTY3MTljOTZhZWVmZmYzMjIzZWRkODA2YsuNwyo=: 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.339 
18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.339 18:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.267 nvme0n1 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTIwZWNjYTI4NzUzZTNjY2EyMzIzNDgwMDk5MWEzMzZhOTZlMjg5MTlmYzNmMGQ2C832ZQ==: 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: ]] 00:33:33.267 
18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjJiY2FkNjc1MTVhMTc1YWIwM2M2OTVhNDVmOWQ1MjlkODFlYjJiOGU0MjY4Mjk3fUFHcA==: 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.267 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.268 request: 00:33:33.268 { 00:33:33.268 "name": "nvme0", 00:33:33.268 "trtype": "tcp", 00:33:33.268 "traddr": "10.0.0.1", 00:33:33.268 "adrfam": "ipv4", 00:33:33.268 "trsvcid": "4420", 00:33:33.268 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:33.268 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:33.268 "prchk_reftag": false, 00:33:33.268 "prchk_guard": false, 00:33:33.268 "hdgst": false, 00:33:33.268 "ddgst": false, 00:33:33.268 "method": "bdev_nvme_attach_controller", 00:33:33.268 "req_id": 1 00:33:33.268 } 00:33:33.268 Got JSON-RPC error response 00:33:33.268 response: 00:33:33.268 { 00:33:33.268 "code": -5, 00:33:33.268 "message": "Input/output error" 00:33:33.268 } 00:33:33.268 18:21:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.268 18:21:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.268 request: 00:33:33.268 { 00:33:33.268 "name": "nvme0", 00:33:33.268 "trtype": "tcp", 00:33:33.268 "traddr": "10.0.0.1", 00:33:33.268 "adrfam": "ipv4", 00:33:33.268 
"trsvcid": "4420", 00:33:33.268 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:33.268 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:33.268 "prchk_reftag": false, 00:33:33.268 "prchk_guard": false, 00:33:33.268 "hdgst": false, 00:33:33.268 "ddgst": false, 00:33:33.268 "dhchap_key": "key2", 00:33:33.268 "method": "bdev_nvme_attach_controller", 00:33:33.268 "req_id": 1 00:33:33.268 } 00:33:33.268 Got JSON-RPC error response 00:33:33.268 response: 00:33:33.268 { 00:33:33.268 "code": -5, 00:33:33.268 "message": "Input/output error" 00:33:33.268 } 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.268 
18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:33.268 18:21:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.268 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.525 request: 00:33:33.525 { 00:33:33.525 "name": "nvme0", 00:33:33.525 "trtype": "tcp", 00:33:33.525 "traddr": "10.0.0.1", 00:33:33.525 "adrfam": "ipv4", 00:33:33.525 "trsvcid": "4420", 00:33:33.525 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:33.525 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:33.525 "prchk_reftag": false, 00:33:33.525 "prchk_guard": false, 00:33:33.525 "hdgst": false, 00:33:33.525 "ddgst": false, 00:33:33.525 "dhchap_key": "key1", 00:33:33.525 "dhchap_ctrlr_key": "ckey2", 00:33:33.526 "method": "bdev_nvme_attach_controller", 00:33:33.526 "req_id": 1 00:33:33.526 } 00:33:33.526 Got JSON-RPC error response 00:33:33.526 response: 00:33:33.526 { 00:33:33.526 "code": -5, 00:33:33.526 "message": "Input/output error" 00:33:33.526 } 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:33.526 18:21:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:33.526 rmmod nvme_tcp 00:33:33.526 rmmod nvme_fabrics 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2485476 ']' 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2485476 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2485476 ']' 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2485476 00:33:33.526 18:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:33.526 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:33.526 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2485476 00:33:33.526 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:33.526 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:33:33.526 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2485476' 00:33:33.526 killing process with pid 2485476 00:33:33.526 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2485476 00:33:33.526 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2485476 00:33:33.784 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:33.784 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:33.784 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:33.784 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:33.784 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:33.784 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.784 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:33.784 18:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:35.683 18:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:37.107 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:37.107 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:37.107 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:37.107 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:37.107 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:37.108 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:37.108 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:37.108 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:37.108 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:37.108 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:37.108 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:37.108 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:37.108 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:37.108 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:37.108 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:37.108 0000:80:04.0 (8086 0e20): 
ioatdma -> vfio-pci 00:33:38.046 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:38.046 18:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kE3 /tmp/spdk.key-null.Zqv /tmp/spdk.key-sha256.MFF /tmp/spdk.key-sha384.GhL /tmp/spdk.key-sha512.RPE /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:38.046 18:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:39.420 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:39.420 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:39.420 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:39.420 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:39.420 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:39.420 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:39.420 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:39.420 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:39.420 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:39.420 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:39.420 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:39.420 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:39.420 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:39.420 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:39.420 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:39.420 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:39.420 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:39.420 00:33:39.420 real 0m46.719s 00:33:39.420 user 0m43.453s 00:33:39.420 sys 0m5.775s 00:33:39.420 18:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:39.420 18:21:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.420 ************************************ 00:33:39.420 END TEST nvmf_auth_host 00:33:39.420 ************************************ 00:33:39.420 18:21:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:33:39.420 18:21:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:33:39.420 18:21:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:39.420 18:21:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:39.420 18:21:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:39.420 18:21:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.679 ************************************ 00:33:39.679 START TEST nvmf_digest 00:33:39.679 ************************************ 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:39.679 * Looking for test storage... 
00:33:39.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.679 18:21:47 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:33:39.679 18:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:41.583 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:41.583 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:41.583 18:21:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:41.583 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:41.583 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.583 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:41.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:33:41.842 00:33:41.842 --- 10.0.0.2 ping statistics --- 00:33:41.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.842 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:33:41.842 00:33:41.842 --- 10.0.0.1 ping statistics --- 00:33:41.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.842 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:41.842 ************************************ 00:33:41.842 START TEST nvmf_digest_clean 00:33:41.842 ************************************ 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2495026 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2495026 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2495026 ']' 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:41.842 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:41.842 [2024-07-23 18:21:49.410262] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:33:41.842 [2024-07-23 18:21:49.410359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.842 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.842 [2024-07-23 18:21:49.477474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.101 [2024-07-23 18:21:49.561163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:42.101 [2024-07-23 18:21:49.561213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:42.101 [2024-07-23 18:21:49.561226] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:42.101 [2024-07-23 18:21:49.561237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:42.101 [2024-07-23 18:21:49.561246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:42.101 [2024-07-23 18:21:49.561270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.101 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.101 null0 00:33:42.101 [2024-07-23 18:21:49.736180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.101 [2024-07-23 18:21:49.760472] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2495055 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2495055 /var/tmp/bperf.sock 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2495055 ']' 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:42.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:42.359 18:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.359 [2024-07-23 18:21:49.805149] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:33:42.359 [2024-07-23 18:21:49.805212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495055 ] 00:33:42.359 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.359 [2024-07-23 18:21:49.863359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.359 [2024-07-23 18:21:49.946994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.359 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:42.359 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:42.359 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:42.359 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:42.359 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:42.925 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.925 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:43.182 nvme0n1 00:33:43.182 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:43.182 18:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:43.440 Running I/O for 2 seconds... 00:33:45.339 00:33:45.339 Latency(us) 00:33:45.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.339 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:45.339 nvme0n1 : 2.04 18965.55 74.08 0.00 0.00 6604.05 2912.71 45244.11 00:33:45.339 =================================================================================================================== 00:33:45.339 Total : 18965.55 74.08 0.00 0.00 6604.05 2912.71 45244.11 00:33:45.339 0 00:33:45.339 18:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:45.339 18:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:45.339 18:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:45.339 18:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:45.339 18:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:45.339 | select(.opcode=="crc32c") 00:33:45.339 | "\(.module_name) \(.executed)"' 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2495055 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2495055 ']' 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2495055 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.596 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2495055 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2495055' 00:33:45.854 killing process with pid 2495055 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2495055 00:33:45.854 Received shutdown signal, test time was about 2.000000 seconds 00:33:45.854 00:33:45.854 Latency(us) 00:33:45.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.854 =================================================================================================================== 00:33:45.854 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2495055 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2495460 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2495460 /var/tmp/bperf.sock 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2495460 ']' 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:45.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:45.854 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:45.854 [2024-07-23 18:21:53.511375] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:33:45.854 [2024-07-23 18:21:53.511450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495460 ] 00:33:45.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.854 Zero copy mechanism will not be used. 00:33:46.111 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.111 [2024-07-23 18:21:53.573344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.111 [2024-07-23 18:21:53.661345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.111 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:46.111 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:46.111 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:46.111 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:46.111 18:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:46.676 18:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.676 18:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.933 nvme0n1 00:33:46.933 18:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:46.933 18:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:47.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:47.192 Zero copy mechanism will not be used. 00:33:47.192 Running I/O for 2 seconds... 00:33:49.091 00:33:49.091 Latency(us) 00:33:49.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.091 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:49.091 nvme0n1 : 2.00 5942.35 742.79 0.00 0.00 2688.20 694.80 5315.70 00:33:49.091 =================================================================================================================== 00:33:49.091 Total : 5942.35 742.79 0.00 0.00 2688.20 694.80 5315.70 00:33:49.091 0 00:33:49.091 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:49.091 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:49.091 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:49.091 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock accel_get_stats 00:33:49.091 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:49.091 | select(.opcode=="crc32c") 00:33:49.091 | "\(.module_name) \(.executed)"' 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2495460 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2495460 ']' 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2495460 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2495460 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2495460' 00:33:49.349 killing process with pid 2495460 00:33:49.349 18:21:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2495460 00:33:49.349 Received shutdown signal, test time was about 2.000000 seconds 00:33:49.349 00:33:49.349 Latency(us) 00:33:49.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.349 =================================================================================================================== 00:33:49.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:49.349 18:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2495460 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2495983 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2495983 /var/tmp/bperf.sock 00:33:49.607 18:21:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2495983 ']' 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:49.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:49.607 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:49.607 [2024-07-23 18:21:57.169711] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:33:49.607 [2024-07-23 18:21:57.169800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495983 ] 00:33:49.607 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.607 [2024-07-23 18:21:57.228034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.864 [2024-07-23 18:21:57.309228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.864 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:49.864 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:49.864 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:49.864 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:49.864 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:50.122 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:50.122 18:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:50.688 nvme0n1 00:33:50.688 18:21:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:50.688 18:21:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:50.688 Running I/O for 2 seconds... 00:33:53.214 00:33:53.214 Latency(us) 00:33:53.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.214 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:53.214 nvme0n1 : 2.01 22045.38 86.11 0.00 0.00 5800.05 2609.30 15728.64 00:33:53.214 =================================================================================================================== 00:33:53.214 Total : 22045.38 86.11 0.00 0.00 5800.05 2609.30 15728.64 00:33:53.214 0 00:33:53.214 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:53.214 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:53.214 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:53.214 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:53.214 | select(.opcode=="crc32c") 00:33:53.214 | "\(.module_name) \(.executed)"' 00:33:53.214 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 2495983 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2495983 ']' 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2495983 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2495983 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2495983' 00:33:53.215 killing process with pid 2495983 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2495983 00:33:53.215 Received shutdown signal, test time was about 2.000000 seconds 00:33:53.215 00:33:53.215 Latency(us) 00:33:53.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.215 =================================================================================================================== 00:33:53.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2495983 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2496392 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2496392 /var/tmp/bperf.sock 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2496392 ']' 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:53.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:53.215 18:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:53.473 [2024-07-23 18:22:00.885016] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:33:53.474 [2024-07-23 18:22:00.885105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496392 ] 00:33:53.474 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:53.474 Zero copy mechanism will not be used. 00:33:53.474 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.474 [2024-07-23 18:22:00.943644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.474 [2024-07-23 18:22:01.027267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.474 18:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:53.474 18:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:53.474 18:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:53.474 18:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:53.474 18:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:54.037 18:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:54.037 18:22:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:54.293 nvme0n1 00:33:54.293 18:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:54.293 18:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:54.549 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:54.549 Zero copy mechanism will not be used. 00:33:54.549 Running I/O for 2 seconds... 00:33:56.484 00:33:56.485 Latency(us) 00:33:56.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.485 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:56.485 nvme0n1 : 2.00 5937.58 742.20 0.00 0.00 2688.16 2075.31 11019.76 00:33:56.485 =================================================================================================================== 00:33:56.485 Total : 5937.58 742.20 0.00 0.00 2688.16 2075.31 11019.76 00:33:56.485 0 00:33:56.485 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:56.485 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:56.485 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:56.485 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:56.485 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:33:56.485 | select(.opcode=="crc32c") 00:33:56.485 | "\(.module_name) \(.executed)"' 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2496392 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2496392 ']' 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2496392 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2496392 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2496392' 00:33:56.741 killing process with pid 2496392 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2496392 00:33:56.741 Received shutdown signal, test time was about 2.000000 seconds 00:33:56.741 
00:33:56.741 Latency(us) 00:33:56.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.741 =================================================================================================================== 00:33:56.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.741 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2496392 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2495026 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2495026 ']' 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2495026 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2495026 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2495026' 00:33:56.998 killing process with pid 2495026 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2495026 00:33:56.998 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2495026 00:33:57.255 00:33:57.255 real 0m15.375s 00:33:57.255 user 0m30.556s 00:33:57.255 sys 0m4.107s 00:33:57.255 18:22:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:57.255 ************************************ 00:33:57.255 END TEST nvmf_digest_clean 00:33:57.255 ************************************ 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.255 ************************************ 00:33:57.255 START TEST nvmf_digest_error 00:33:57.255 ************************************ 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2496834 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 
00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2496834 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2496834 ']' 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.255 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:57.256 18:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.256 [2024-07-23 18:22:04.838587] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:33:57.256 [2024-07-23 18:22:04.838673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.256 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.256 [2024-07-23 18:22:04.903085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.513 [2024-07-23 18:22:04.988928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.513 [2024-07-23 18:22:04.988984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:57.513 [2024-07-23 18:22:04.989007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:57.513 [2024-07-23 18:22:04.989018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:57.513 [2024-07-23 18:22:04.989028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:57.513 [2024-07-23 18:22:04.989053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.513 [2024-07-23 18:22:05.077668] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.513 18:22:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.513 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.771 null0 00:33:57.771 [2024-07-23 18:22:05.188548] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.771 [2024-07-23 18:22:05.212763] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2496969 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2496969 /var/tmp/bperf.sock 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2496969 ']' 
00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:57.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:57.771 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.771 [2024-07-23 18:22:05.256621] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:33:57.771 [2024-07-23 18:22:05.256699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496969 ] 00:33:57.771 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.771 [2024-07-23 18:22:05.313361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.771 [2024-07-23 18:22:05.396475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.029 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:58.029 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:58.029 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:58.029 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:58.286 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:58.286 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.286 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:58.286 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.286 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:58.286 18:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:58.851 nvme0n1 00:33:58.851 18:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:58.851 18:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.851 18:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:58.851 18:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.851 18:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:58.851 18:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:58.851 Running I/O for 2 seconds... 00:33:58.851 [2024-07-23 18:22:06.339324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.851 [2024-07-23 18:22:06.339396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-07-23 18:22:06.339428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.851 [2024-07-23 18:22:06.354183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.851 [2024-07-23 18:22:06.354217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-07-23 18:22:06.354238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.851 [2024-07-23 18:22:06.366611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.851 [2024-07-23 18:22:06.366656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-07-23 18:22:06.366679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.379735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.379765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4473 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.379792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.394174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.394207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.394224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.405377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.405407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.405430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.420525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.420554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.420571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.436353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.436397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.436413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.452485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.452513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.452544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.466431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.466463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.466483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.477823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.477851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.477868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.491385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.491414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.491429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.852 [2024-07-23 18:22:06.506747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:58.852 [2024-07-23 18:22:06.506788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-07-23 18:22:06.506834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.521409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.521442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.521470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.532776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.532806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.532825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.547276] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.547329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.547349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.562211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.562242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.562266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.577992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.578023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.578040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.589593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.589636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.589652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.605242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.605270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.605289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.619993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.620040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.620069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.633811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.633842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.633860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.644821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.644850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.644879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.661004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.661032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.661052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.671670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.671714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.671730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.686874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.686902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.686919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.696996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.697023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 
18:22:06.697042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.711043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.711074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.711091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.724660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.724690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.724709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.736027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.736060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.110 [2024-07-23 18:22:06.736078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.110 [2024-07-23 18:22:06.748048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:33:59.110 [2024-07-23 18:22:06.748076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5307 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.110 [2024-07-23 18:22:06.748092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.110 [2024-07-23 18:22:06.761372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.110 [2024-07-23 18:22:06.761403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.110 [2024-07-23 18:22:06.761422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.774481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.774515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.774548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.786125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.786157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.786174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.798392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.798435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.798454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.812585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.812615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.812637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.825490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.825518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.825535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.836725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.836757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.836774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.849867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.849897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.849914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.863129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.863158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.863175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.874384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.874416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.874434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.887174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.887204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.887222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.898553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.898607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.898623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.913678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.913706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.913745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.924211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.924238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.924260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.941024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.941054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.941075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.952135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.952165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.952192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.968163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.968194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.968222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.980736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.980765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.980783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:06.992657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:06.992702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:06.992722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:07.005879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:07.005907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:07.005927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.369 [2024-07-23 18:22:07.017624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.369 [2024-07-23 18:22:07.017666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.369 [2024-07-23 18:22:07.017685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.031662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.031693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.031724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.044488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.044518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.044534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.055391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.055419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.055438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.071326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.071378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.071399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.087023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.087052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.087066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.102624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.102655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.102686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.116333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.116365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.116382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.128223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.128254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.128271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.142052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.142094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.142110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.156806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.156836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.156852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.169132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.169162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.169178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.181548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.627 [2024-07-23 18:22:07.181578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.627 [2024-07-23 18:22:07.181599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.627 [2024-07-23 18:22:07.195352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.628 [2024-07-23 18:22:07.195382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.628 [2024-07-23 18:22:07.195398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.628 [2024-07-23 18:22:07.205493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.628 [2024-07-23 18:22:07.205522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.628 [2024-07-23 18:22:07.205539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.628 [2024-07-23 18:22:07.220853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.628 [2024-07-23 18:22:07.220885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.628 [2024-07-23 18:22:07.220902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.628 [2024-07-23 18:22:07.234335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.628 [2024-07-23 18:22:07.234380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.628 [2024-07-23 18:22:07.234396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.628 [2024-07-23 18:22:07.245285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.628 [2024-07-23 18:22:07.245335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.628 [2024-07-23 18:22:07.245352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.628 [2024-07-23 18:22:07.261284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.628 [2024-07-23 18:22:07.261312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.628 [2024-07-23 18:22:07.261350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.628 [2024-07-23 18:22:07.273839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.628 [2024-07-23 18:22:07.273869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.628 [2024-07-23 18:22:07.273885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.287981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.288014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.288031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.299133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.299168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.299184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.315198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.315227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.315243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.327458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.327487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.327503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.341126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.341158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.341175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.352324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.352354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.352379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.366880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.366910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.366926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.381056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.381102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.381119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.391711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.391739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.391754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.406926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.406956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.406972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.421196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.421224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.886 [2024-07-23 18:22:07.421240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.886 [2024-07-23 18:22:07.431645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.886 [2024-07-23 18:22:07.431673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.431688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.887 [2024-07-23 18:22:07.446645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.887 [2024-07-23 18:22:07.446674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.446689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.887 [2024-07-23 18:22:07.461384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.887 [2024-07-23 18:22:07.461412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.461427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.887 [2024-07-23 18:22:07.473672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.887 [2024-07-23 18:22:07.473700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.473716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.887 [2024-07-23 18:22:07.485897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.887 [2024-07-23 18:22:07.485925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.485941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.887 [2024-07-23 18:22:07.498867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.887 [2024-07-23 18:22:07.498912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.498928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.887 [2024-07-23 18:22:07.509918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.887 [2024-07-23 18:22:07.509963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.509979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.887 [2024-07-23 18:22:07.522386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.887 [2024-07-23 18:22:07.522415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.522436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.887 [2024-07-23 18:22:07.533918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:33:59.887 [2024-07-23 18:22:07.533945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.887 [2024-07-23 18:22:07.533976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.144 [2024-07-23 18:22:07.548923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.144 [2024-07-23 18:22:07.548956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.144 [2024-07-23 18:22:07.548973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.144 [2024-07-23 18:22:07.560818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.144 [2024-07-23 18:22:07.560848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.560864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.575153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.575185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.575215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.589965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.589994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.590009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.602903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.602934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.602950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.614333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.614364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.614380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.628414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.628459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.628476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.643068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.643104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.643121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.655266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.655294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.655332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.667746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.667774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.667788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.681394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.681422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.681437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.693839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.693868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.693883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.705652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.705698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.705714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.719814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.719844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.719860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.731948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.731980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.731997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.744045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.744075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.744092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.755292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.755342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.755358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.768488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.768517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.768532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.781061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.781092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.781109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.145 [2024-07-23 18:22:07.794133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.145 [2024-07-23 18:22:07.794163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.145 [2024-07-23 18:22:07.794180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.403 [2024-07-23 18:22:07.807439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720)
00:34:00.403 [2024-07-23 18:22:07.807493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22946 len:1
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.807522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.819737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.819768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.819784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.830181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.830210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.830225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.844849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.844880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.844897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.858945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.858977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.858993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.869118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.869149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.869164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.884146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.884175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.884190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.896571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.896599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.896629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.909151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 
00:34:00.403 [2024-07-23 18:22:07.909180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.909195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.920295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.920344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.920360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.933021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.933053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.933085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.946221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.946251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.946268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.960291] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.960329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.960364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.971133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.971162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.971178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:07.987046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:07.987075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:07.987091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:08.001872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:08.001917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:08.001934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:08.013132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:08.013161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:08.013176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:08.027621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:08.027652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:08.027668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:08.039512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:08.039541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:08.039557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.403 [2024-07-23 18:22:08.054234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.403 [2024-07-23 18:22:08.054262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.403 [2024-07-23 18:22:08.054277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.660 [2024-07-23 18:22:08.066833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.660 [2024-07-23 18:22:08.066863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.660 [2024-07-23 18:22:08.066878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.660 [2024-07-23 18:22:08.081531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.660 [2024-07-23 18:22:08.081562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.660 [2024-07-23 18:22:08.081584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.660 [2024-07-23 18:22:08.093386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.660 [2024-07-23 18:22:08.093416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.660 [2024-07-23 18:22:08.093432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.660 [2024-07-23 18:22:08.107175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.660 [2024-07-23 18:22:08.107203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.660 [2024-07-23 
18:22:08.107220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.660 [2024-07-23 18:22:08.120591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.660 [2024-07-23 18:22:08.120634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.660 [2024-07-23 18:22:08.120649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.660 [2024-07-23 18:22:08.131234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.660 [2024-07-23 18:22:08.131261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.660 [2024-07-23 18:22:08.131279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.660 [2024-07-23 18:22:08.144159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.660 [2024-07-23 18:22:08.144186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.660 [2024-07-23 18:22:08.144216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.660 [2024-07-23 18:22:08.160395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.160425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10717 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.160442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.171488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.171516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.171538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.185827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.185857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.185874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.201433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.201467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.201484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.215449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.215479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.215498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.227037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.227064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.227083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.241867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.241911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.241929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.256184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.256213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.256233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.267706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.267749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.267770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.280868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.280895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.280915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.293528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.293557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.293577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.306211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.306240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.306257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.661 [2024-07-23 18:22:08.318732] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bbc720) 00:34:00.661 [2024-07-23 18:22:08.318777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.661 [2024-07-23 18:22:08.318796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.918 00:34:00.918 Latency(us) 00:34:00.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.918 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:00.918 nvme0n1 : 2.00 19325.62 75.49 0.00 0.00 6614.14 3543.80 21456.97 00:34:00.918 =================================================================================================================== 00:34:00.918 Total : 19325.62 75.49 0.00 0.00 6614.14 3543.80 21456.97 00:34:00.918 0 00:34:00.918 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:00.918 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:00.918 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:00.918 | .driver_specific 00:34:00.918 | .nvme_error 00:34:00.918 | .status_code 00:34:00.918 | .command_transient_transport_error' 00:34:00.918 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 )) 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2496969 00:34:01.175 18:22:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2496969 ']' 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2496969 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2496969 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2496969' 00:34:01.175 killing process with pid 2496969 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2496969 00:34:01.175 Received shutdown signal, test time was about 2.000000 seconds 00:34:01.175 00:34:01.175 Latency(us) 00:34:01.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.175 =================================================================================================================== 00:34:01.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2496969 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:01.175 18:22:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2497379 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2497379 /var/tmp/bperf.sock 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2497379 ']' 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:01.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:01.175 18:22:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:01.433 [2024-07-23 18:22:08.858611] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:34:01.433 [2024-07-23 18:22:08.858699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497379 ] 00:34:01.433 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:01.433 Zero copy mechanism will not be used. 00:34:01.433 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.433 [2024-07-23 18:22:08.918665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.433 [2024-07-23 18:22:09.004745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.691 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:01.691 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:01.691 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:01.691 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:01.948 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:01.948 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.948 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:01.948 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.948 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:01.948 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:02.205 nvme0n1 00:34:02.205 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:02.205 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.205 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.205 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.205 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:02.205 18:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:02.463 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:02.463 Zero copy mechanism will not be used. 00:34:02.463 Running I/O for 2 seconds... 
00:34:02.463 [2024-07-23 18:22:09.993448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:09.993504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:09.993524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.000210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.000242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:10.000259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.007056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.007096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:10.007116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.013262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.013297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:10.013324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.019726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.019757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:10.019775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.026125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.026157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:10.026174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.032692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.032726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:10.032746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.039117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.039157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:10.039176] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.045560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.045594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.463 [2024-07-23 18:22:10.045612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.463 [2024-07-23 18:22:10.052051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.463 [2024-07-23 18:22:10.052083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.052100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.058469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.058502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.058521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.065038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.065070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.065086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.071300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.071356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.071375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.077702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.077749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.077766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.084356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.084403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.084421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.090754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.090795] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.090817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.097571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.097620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.097638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.104859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.104893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.104926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.113077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 18:22:10.113110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.113128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.464 [2024-07-23 18:22:10.120892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.464 [2024-07-23 
18:22:10.120943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.464 [2024-07-23 18:22:10.120963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.723 [2024-07-23 18:22:10.128126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.723 [2024-07-23 18:22:10.128159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.723 [2024-07-23 18:22:10.128177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.723 [2024-07-23 18:22:10.134582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.723 [2024-07-23 18:22:10.134617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.723 [2024-07-23 18:22:10.134636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.723 [2024-07-23 18:22:10.141062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.141098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.141116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.147519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.147554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.147572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.153996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.154032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.154058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.160521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.160555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.160573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.167034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.167082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.167099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.173680] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.173728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.173745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.180180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.180229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.180247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.186562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.186598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.186631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.192939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.192973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.192991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.199413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.199448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.199467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.205792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.205826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.205844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.212104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.212138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.212156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.218628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.218661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.218679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.225054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.225088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.225105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.231364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.231412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.231429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.237817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.237866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.237884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.244435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.244484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.244501] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.251117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.251157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.251192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.257598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.257633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.257652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.263938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.263989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.264021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.270296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.270356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.270375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.276683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.276716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.276733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.283083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.283115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.283133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.289422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.289473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.289492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.295969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.296017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.296036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.302593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.302642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.302660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.724 [2024-07-23 18:22:10.309055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.724 [2024-07-23 18:22:10.309088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.724 [2024-07-23 18:22:10.309106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.315638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.315671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.315704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.322169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.322208] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.322227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.328528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.328562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.328581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.334899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.334932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.334950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.341231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.341269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.341304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.347641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.347674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.347692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.353995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.354029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.354047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.360232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.360265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.360283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.366589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:02.725 [2024-07-23 18:22:10.366638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.725 [2024-07-23 18:22:10.366656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.725 [2024-07-23 18:22:10.373004] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.725 [2024-07-23 18:22:10.373037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.725 [2024-07-23 18:22:10.373070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.725 [2024-07-23 18:22:10.379393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.725 [2024-07-23 18:22:10.379428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.725 [2024-07-23 18:22:10.379446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.385913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.385947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.385964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.392339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.392373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.392392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.398695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.398730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.398747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.404988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.405021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.405039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.411464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.411498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.411517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.417889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.417924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.417942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.424381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.424439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.424457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.430738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.430772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.430797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.437146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.437179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.437211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.443649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.443683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.443716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.450064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.450098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.450130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.456671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.984 [2024-07-23 18:22:10.456720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.984 [2024-07-23 18:22:10.456737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.984 [2024-07-23 18:22:10.462962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.462996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.463013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.469256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.469290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.469333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.475686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.475719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.475737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.482639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.482688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.482705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.490473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.490515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.490549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.498380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.498414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.498433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.505750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.505785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.505803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.512400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.512434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.512453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.518802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.518835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.518853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.525133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.525167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.525184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.531438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.531473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.531491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.537844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.537878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.537895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.544224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.544257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.544275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.550669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.550704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.550721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.556943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.556977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.556995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.563445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.563481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.563499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.569829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.569863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.569881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.576100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.576135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.576153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.582451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.582486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.582504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.588830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.588863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.588881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.595088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.595122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.595139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.601418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.601468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.601493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.607876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.607911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.607931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.614188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.614233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.614250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.620594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.620629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.620647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.626976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.627010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.627028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.985 [2024-07-23 18:22:10.633410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.985 [2024-07-23 18:22:10.633443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.985 [2024-07-23 18:22:10.633462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.986 [2024-07-23 18:22:10.639857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:02.986 [2024-07-23 18:22:10.639890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.986 [2024-07-23 18:22:10.639908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.646357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.646413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.646433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.653056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.653104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.653121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.657567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.657609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.657652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.662947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.662980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.662997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.669368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.669402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.669419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.675645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.675693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.675710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.682006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.682053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.682070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.688564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.688623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.688640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.695076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.695109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.695126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.701533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.701574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.701591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.708041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.708082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.708110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.714661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.714693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.714725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.721042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.721073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.721089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.727686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.727718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.727735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.734236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.734269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.734285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.740726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.740759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.740775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.747069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.747102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.747119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.753490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.753524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.753542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.760047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.760082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.245 [2024-07-23 18:22:10.760116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.245 [2024-07-23 18:22:10.766559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.245 [2024-07-23 18:22:10.766600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.766619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.773121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.773154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.773171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.779563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.779597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.779629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.785857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.785890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.785907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.792000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.792048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.792064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.798534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.798567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.798584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.805335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.805369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.805387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.811518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.811551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.811569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.817954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.817986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.818004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.824238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.824270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.824287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.830661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.830694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.830712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.836799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.836832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.836850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.843092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.843124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.843142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.849651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.849684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.849701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.855942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.855976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.856009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.862289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.862327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.862361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.868703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.868736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.868752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.875241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.875273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.875297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.881641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.881689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.881707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.888000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.888034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.246 [2024-07-23 18:22:10.888066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.246 [2024-07-23 18:22:10.894372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.246 [2024-07-23 18:22:10.894405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.246 [2024-07-23 18:22:10.894423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.246 [2024-07-23 18:22:10.900752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.246 [2024-07-23 18:22:10.900785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.246 [2024-07-23 18:22:10.900803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.907196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.907243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.907260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.913872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.913904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.913921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.920424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.920458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.920477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.926751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.926783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.926800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.933266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.933305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.933347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.939645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.939677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.939694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.945842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 
00:34:03.506 [2024-07-23 18:22:10.945875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.945892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.952266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.952335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.952366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.958203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.958237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.958253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.964397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.964433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.964450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.970705] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.970755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.970773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.977010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.977059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.977076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.983545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.983578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.983596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.990099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.990132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.990149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:10.996564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:10.996597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:10.996629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.003136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.003170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.003201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.009646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.009679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.009696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.016219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.016254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.016273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.022765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.022800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.022818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.029188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.029222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.029239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.035473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.035508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.035526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.041736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.041769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.041794] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.048159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.048192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.048210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.054534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.054569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.054586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.506 [2024-07-23 18:22:11.060775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.506 [2024-07-23 18:22:11.060823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.506 [2024-07-23 18:22:11.060841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.067193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.067242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.067259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.073643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.073691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.073708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.080062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.080096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.080113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.086513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.086548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.086566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.092929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.092963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.092980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.099229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.099262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.099280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.105506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.105540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.105558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.111864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.111897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.111915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.118145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.118179] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.118197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.124531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.124564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.124582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.130963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.130995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.131014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.137429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.137463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.137495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.143954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.143987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.144004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.150428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.150463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.150487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.156898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.156931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.156949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.507 [2024-07-23 18:22:11.163366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.507 [2024-07-23 18:22:11.163404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.507 [2024-07-23 18:22:11.163422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.766 [2024-07-23 18:22:11.169840] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.766 [2024-07-23 18:22:11.169874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.766 [2024-07-23 18:22:11.169891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.766 [2024-07-23 18:22:11.176158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.176193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.176211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.182507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.182541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.182573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.188889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.188923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.188940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.195405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.195446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.195475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.200248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.200294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.200311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.208623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.208665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.208698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.216496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.216528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.216545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.225100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.225132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.225150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.233224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.233279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.233339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.241129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.241178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.241204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.767 [2024-07-23 18:22:11.249044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:03.767 [2024-07-23 18:22:11.249092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.767 [2024-07-23 18:22:11.249109] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.257289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.257346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.257380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.265429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.265463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.265482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.273486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.273520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.273539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.281556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.281597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.281637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.289480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.289515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.289534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.293890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.293922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.293939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.302010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.302056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.302074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.309545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.309578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.309596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.316213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.316246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.316263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.323486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.323535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.323553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.330460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.330493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.330510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.336996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.337028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.337052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.343437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.343471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.343489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.349711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.349758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.349776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.356037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.356069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.356101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.362539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.767 [2024-07-23 18:22:11.362571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.767 [2024-07-23 18:22:11.362589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.767 [2024-07-23 18:22:11.369034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.369067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.369085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.768 [2024-07-23 18:22:11.375430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.375479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.375497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.768 [2024-07-23 18:22:11.381725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.381773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.381790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.768 [2024-07-23 18:22:11.388049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.388096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.388114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.768 [2024-07-23 18:22:11.394188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.394227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.394245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.768 [2024-07-23 18:22:11.400474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.400523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.400541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.768 [2024-07-23 18:22:11.406787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.406837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.406854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.768 [2024-07-23 18:22:11.413133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.413167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.413184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.768 [2024-07-23 18:22:11.419450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:03.768 [2024-07-23 18:22:11.419485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.768 [2024-07-23 18:22:11.419503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.425939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.425973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.425991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.432563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.432611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.432629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.438966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.438998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.439016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.445404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.445437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.445469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.451957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.451990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.452007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.458262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.458310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.458339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.464524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.464558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.464577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.470906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.470939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.470956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.477222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.477256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.477274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.483658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.483690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.483708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.490005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.490040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.490073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.496376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.496409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.496426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.502819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.502866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.502889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.509232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.509264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.509280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.515670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.515718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.027 [2024-07-23 18:22:11.515735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.027 [2024-07-23 18:22:11.522195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.027 [2024-07-23 18:22:11.522228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.522246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.528636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.528668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.528701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.535000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.535047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.535064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.541429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.541461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.541478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.547890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.547922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.547940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.554457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.554490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.554508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.560829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.560875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.560892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.567534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.567566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.567583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.573818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.573866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.573883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.580259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.580292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.580334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.586603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.586644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.586676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.593169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.593203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.593221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.599585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.599618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.599650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.605940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.605987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.606004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.612314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.612369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.612391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.618805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.618840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.618857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.625246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.625279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.625297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.631604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.631637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.631654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.638117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.638149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.638166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.644490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.644539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.644557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.650942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.650974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.650992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.657415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.657447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.657479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.663811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.663857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.663875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.670182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.670234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.670251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.676496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.676545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.028 [2024-07-23 18:22:11.676563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.028 [2024-07-23 18:22:11.683063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.028 [2024-07-23 18:22:11.683096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.683114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.689695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.689729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.689762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.696193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.696225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.696242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.702739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.702771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.702802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.709074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.709107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.709140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.715551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.715598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.715615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.721926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.721958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.721974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.728335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.728369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.728402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.734740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.734773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.734804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.741137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.741171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.741188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.747475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.747525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.747543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.753884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130)
00:34:04.288 [2024-07-23 18:22:11.753930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.288 [2024-07-23 18:22:11.753947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:04.288 [2024-07-23 18:22:11.760315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data
digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.760358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.760376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.767706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.767753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.767770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.775943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.775975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.776007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.784038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.784085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.784107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.791611] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.791652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.791677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.800029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.800078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.800096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.808133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.808179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.808197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.816342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.816391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.816409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.824674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.824707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.824738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.832965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.832997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.833013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.841261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.841294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.841311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.288 [2024-07-23 18:22:11.849350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.288 [2024-07-23 18:22:11.849384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.288 [2024-07-23 18:22:11.849402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.854024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.854062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.854081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.862100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.862132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.862149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.870461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.870493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.870510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.878180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.878220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.878237] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.885711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.885756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.885772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.894104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.894136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.894153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.902169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.902219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.902237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.909515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.909547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.909564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.916215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.916247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.916264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.923866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.923911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.923927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.930602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.930635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.930667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.937527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.937560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.937577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.289 [2024-07-23 18:22:11.944289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.289 [2024-07-23 18:22:11.944346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.289 [2024-07-23 18:22:11.944367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.547 [2024-07-23 18:22:11.951462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.547 [2024-07-23 18:22:11.951495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.547 [2024-07-23 18:22:11.951512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.547 [2024-07-23 18:22:11.958026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.547 [2024-07-23 18:22:11.958073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.547 [2024-07-23 18:22:11.958090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.547 [2024-07-23 18:22:11.964499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.547 [2024-07-23 18:22:11.964532] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.547 [2024-07-23 18:22:11.964550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.547 [2024-07-23 18:22:11.971072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.547 [2024-07-23 18:22:11.971118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.547 [2024-07-23 18:22:11.971135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.547 [2024-07-23 18:22:11.978024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.547 [2024-07-23 18:22:11.978071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.547 [2024-07-23 18:22:11.978092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.547 [2024-07-23 18:22:11.984998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2106130) 00:34:04.547 [2024-07-23 18:22:11.985031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.547 [2024-07-23 18:22:11.985048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.547 00:34:04.547 Latency(us) 00:34:04.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:34:04.547 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:04.547 nvme0n1 : 2.00 4679.20 584.90 0.00 0.00 3414.12 837.40 11990.66 00:34:04.547 =================================================================================================================== 00:34:04.547 Total : 4679.20 584.90 0.00 0.00 3414.12 837.40 11990.66 00:34:04.547 0 00:34:04.547 18:22:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:04.547 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:04.547 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:04.547 | .driver_specific 00:34:04.547 | .nvme_error 00:34:04.547 | .status_code 00:34:04.547 | .command_transient_transport_error' 00:34:04.547 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 302 > 0 )) 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2497379 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2497379 ']' 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2497379 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2497379 00:34:04.805 18:22:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2497379' 00:34:04.805 killing process with pid 2497379 00:34:04.805 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2497379 00:34:04.806 Received shutdown signal, test time was about 2.000000 seconds 00:34:04.806 00:34:04.806 Latency(us) 00:34:04.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.806 =================================================================================================================== 00:34:04.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:04.806 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2497379 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2497783 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:05.064 18:22:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2497783 /var/tmp/bperf.sock 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2497783 ']' 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:05.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:05.064 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:05.064 [2024-07-23 18:22:12.550485] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
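[Editor's note] The `get_transient_errcount` step traced earlier in this run (host/digest.sh@27-28) pipes `bdev_get_iostat` JSON through jq to count transient transport errors. A minimal standalone sketch of that extraction follows; the sample JSON is hypothetical (trimmed to only the fields the filter touches), not actual bperf output.

```shell
# Hypothetical bdev_get_iostat output, reduced to the fields the jq filter walks.
sample='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":302}}}}]}'

# Same filter shape as in the trace: descend from .bdevs[0] to the error counter.
count=$(echo "$sample" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

# The test itself only asserts the counter is non-zero, mirroring `(( 302 > 0 ))`.
echo "transient transport errors: $count"
```

This mirrors how the test decides pass/fail: any non-zero count proves the injected digest corruption surfaced as TRANSIENT TRANSPORT ERROR completions.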
00:34:05.064 [2024-07-23 18:22:12.550585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497783 ] 00:34:05.064 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.064 [2024-07-23 18:22:12.609173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.064 [2024-07-23 18:22:12.693069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.322 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:05.322 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:05.322 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:05.322 18:22:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:05.580 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:05.580 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.580 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:05.580 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.580 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:05.580 18:22:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:05.839 nvme0n1 00:34:05.839 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:05.839 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.839 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:05.839 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.839 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:05.839 18:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:06.098 Running I/O for 2 seconds... 
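[Editor's note] The setup that produces the digest errors below is spread across the trace above: NVMe error-stat options, a TCP controller attach with data digest (`--ddgst`) enabled, then crc32c corruption injected through the accel error RPC. A dry-run sketch of that sequence (commands copied from this log; `run` only echoes them here, since executing them requires the live bdevperf instance listening on `/var/tmp/bperf.sock`):

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Dry-run wrapper: print each RPC instead of invoking it against a live target.
run() { echo "+ $*"; }

# Track per-command NVMe error counters; retry transient errors indefinitely.
run "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the TCP controller with data digest enabled so payloads are CRC-checked.
run "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Inject crc32c corruption (arguments copied from the trace; in the log this goes
# through rpc_cmd, i.e. the target's default RPC socket, not bperf.sock).
run "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
```

With the corruption armed, the subsequent `perform_tests` run fails data-digest checks on receive, which is exactly the flood of `data_crc32_calc_done` / TRANSIENT TRANSPORT ERROR records that follows.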
00:34:06.098 [2024-07-23 18:22:13.523079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190ee5c8 00:34:06.098 [2024-07-23 18:22:13.523961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.524012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.535636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190fb480 00:34:06.098 [2024-07-23 18:22:13.536612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.536664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.546652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e2c28 00:34:06.098 [2024-07-23 18:22:13.547633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.547676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.558949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e0630 00:34:06.098 [2024-07-23 18:22:13.560078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.560121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.571353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e7c50 00:34:06.098 [2024-07-23 18:22:13.572651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.572679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.583584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190feb58 00:34:06.098 [2024-07-23 18:22:13.584994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.585036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.595754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f0bc0 00:34:06.098 [2024-07-23 18:22:13.597285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.597335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.607788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e1f80 00:34:06.098 [2024-07-23 18:22:13.609505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.609554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.616042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190df550 00:34:06.098 [2024-07-23 18:22:13.616752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.616793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.627013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f2d80 00:34:06.098 [2024-07-23 18:22:13.627798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.627840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.640109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f0350 00:34:06.098 [2024-07-23 18:22:13.641032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.641077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.652170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190fb8b8 00:34:06.098 [2024-07-23 18:22:13.653185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.653227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.664313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190de038 00:34:06.098 [2024-07-23 18:22:13.665484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.665526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.676487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5658 00:34:06.098 [2024-07-23 18:22:13.677787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.677829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.685013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190df988 00:34:06.098 [2024-07-23 18:22:13.685689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.685731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.697308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f2d80 00:34:06.098 [2024-07-23 18:22:13.698164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 
[2024-07-23 18:22:13.698207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.709696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190df118 00:34:06.098 [2024-07-23 18:22:13.710690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.710732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.721864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190fc560 00:34:06.098 [2024-07-23 18:22:13.722968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.098 [2024-07-23 18:22:13.723009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.098 [2024-07-23 18:22:13.734015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f4298 00:34:06.099 [2024-07-23 18:22:13.735221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.099 [2024-07-23 18:22:13.735264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.099 [2024-07-23 18:22:13.746168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e6738 00:34:06.099 [2024-07-23 18:22:13.747644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2852 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:06.099 [2024-07-23 18:22:13.747686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.758762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f0788 00:34:06.356 [2024-07-23 18:22:13.760570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.760614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.771163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f8a50 00:34:06.356 [2024-07-23 18:22:13.772886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.772928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.779382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f7538 00:34:06.356 [2024-07-23 18:22:13.780058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.780098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.790474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190ecc78 00:34:06.356 [2024-07-23 18:22:13.791136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:20454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.791178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.802636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190eb328 00:34:06.356 [2024-07-23 18:22:13.803444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.803487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.814704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e6300 00:34:06.356 [2024-07-23 18:22:13.815658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.815701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.826938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e88f8 00:34:06.356 [2024-07-23 18:22:13.828075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.828117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.839160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190ef6a8 00:34:06.356 [2024-07-23 18:22:13.840441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.840482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.851268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f0bc0 00:34:06.356 [2024-07-23 18:22:13.852785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.852827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.863677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190edd58 00:34:06.356 [2024-07-23 18:22:13.865214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.865256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.875727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f8e88 00:34:06.356 [2024-07-23 18:22:13.877434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.877477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.883964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190ec408 00:34:06.356 
[2024-07-23 18:22:13.884672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.884700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.897109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190ee190 00:34:06.356 [2024-07-23 18:22:13.898381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.898409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.909271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e73e0 00:34:06.356 [2024-07-23 18:22:13.910729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.356 [2024-07-23 18:22:13.910778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.356 [2024-07-23 18:22:13.921696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190fa3a0 00:34:06.357 [2024-07-23 18:22:13.923213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:13.923255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.357 [2024-07-23 18:22:13.933777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x954cc0) with pdu=0x2000190e1b48 00:34:06.357 [2024-07-23 18:22:13.935466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:13.935509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.357 [2024-07-23 18:22:13.941977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190eb760 00:34:06.357 [2024-07-23 18:22:13.942718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:13.942761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.357 [2024-07-23 18:22:13.955173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f4298 00:34:06.357 [2024-07-23 18:22:13.956458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:13.956502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.357 [2024-07-23 18:22:13.965949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190f81e0 00:34:06.357 [2024-07-23 18:22:13.966793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:13.966836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.357 [2024-07-23 18:22:13.977655] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190fd208 00:34:06.357 [2024-07-23 18:22:13.978394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:13.978439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.357 [2024-07-23 18:22:13.989718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190fef90 00:34:06.357 [2024-07-23 18:22:13.990651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:13.990693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.357 [2024-07-23 18:22:14.001785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5220 00:34:06.357 [2024-07-23 18:22:14.002891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:14.002933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.357 [2024-07-23 18:22:14.012950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190ef6a8 00:34:06.357 [2024-07-23 18:22:14.014749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.357 [2024-07-23 18:22:14.014777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:06.615 [2024-07-23 18:22:14.026165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.026389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.026415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.039665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.039966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.040008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.053227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.053471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.053515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.066780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.066985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.067027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.080626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.080798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.080840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.094381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.094659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.094699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.107929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.108135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.108177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.121128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.121368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.121397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.134625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.134872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.134913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.148033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.148297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.148345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.161682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.161954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.161995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.174933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.175202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.175228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.615 [2024-07-23 18:22:14.188468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.615 [2024-07-23 18:22:14.188745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.615 [2024-07-23 18:22:14.188786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.616 [2024-07-23 18:22:14.202021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.616 [2024-07-23 18:22:14.202232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.616 [2024-07-23 18:22:14.202272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.616 [2024-07-23 18:22:14.215642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.616 [2024-07-23 18:22:14.215860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.616 [2024-07-23 18:22:14.215904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.616 [2024-07-23 18:22:14.229089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.616 [2024-07-23 18:22:14.229299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.616 
[2024-07-23 18:22:14.229333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.616 [2024-07-23 18:22:14.242615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.616 [2024-07-23 18:22:14.242904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.616 [2024-07-23 18:22:14.242952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.616 [2024-07-23 18:22:14.256146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.616 [2024-07-23 18:22:14.256362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.616 [2024-07-23 18:22:14.256388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.616 [2024-07-23 18:22:14.269917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.616 [2024-07-23 18:22:14.270123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.616 [2024-07-23 18:22:14.270163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.284055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.284262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7857 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.284322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.297638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.297846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.297896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.311107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.311314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.311360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.324417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.324575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.324600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.337687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.337845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.337870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.350797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.351004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.351029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.363957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.364181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.364218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.377270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.377472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.377496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.390309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.874 [2024-07-23 18:22:14.390608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.874 [2024-07-23 18:22:14.390651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.874 [2024-07-23 18:22:14.403883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.404118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.404145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.417132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.417343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.417368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.430539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.430761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.430787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.443848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 
[2024-07-23 18:22:14.444055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.444105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.457062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.457285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.457332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.470661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.470934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.470975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.484135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.484378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.484404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.497785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.498001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.498044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.511218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.511454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.511482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.875 [2024-07-23 18:22:14.524747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:06.875 [2024-07-23 18:22:14.524973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.875 [2024-07-23 18:22:14.524999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.133 [2024-07-23 18:22:14.538720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.133 [2024-07-23 18:22:14.538929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.133 [2024-07-23 18:22:14.538972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.133 [2024-07-23 18:22:14.552365] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.133 [2024-07-23 18:22:14.552577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.133 [2024-07-23 18:22:14.552618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.133 [2024-07-23 18:22:14.566107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.133 [2024-07-23 18:22:14.566339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.133 [2024-07-23 18:22:14.566364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.133 [2024-07-23 18:22:14.579620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.133 [2024-07-23 18:22:14.579825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.133 [2024-07-23 18:22:14.579868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.133 [2024-07-23 18:22:14.593665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.133 [2024-07-23 18:22:14.593879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.133 [2024-07-23 18:22:14.593928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:34:07.133 [2024-07-23 18:22:14.607527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.607706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.607748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.621538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.621841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.621883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.635543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.635819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.635848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.649412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.649685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.649727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.663274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.663512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.663555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.677306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.677585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.677632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.690872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.691076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.691119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.704873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.705087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.705129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.718584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.718864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.718898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.732416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.732635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.732676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.746237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.746472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.746516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.760089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.760300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.760339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.773936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.774193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.774236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.134 [2024-07-23 18:22:14.787798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.134 [2024-07-23 18:22:14.788011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.134 [2024-07-23 18:22:14.788053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.392 [2024-07-23 18:22:14.802174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.802412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.802454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.816047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.816274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 
[2024-07-23 18:22:14.816300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.830126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.830338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.830380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.844090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.844308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.844356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.858107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.858326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.858368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.872293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.872518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19659 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.872547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.886170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.886405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.886446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.900188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.900421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.900465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.914240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.914479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.914522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.928315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.928540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.928568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.942454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.942691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.942717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.956371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.956587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.956630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.970134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.970346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.970385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.984239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.984476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.984518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:14.998155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:14.998428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:14.998472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:15.012022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:15.012272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:15.012315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:15.025734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 [2024-07-23 18:22:15.025999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:15.026042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.393 [2024-07-23 18:22:15.039602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.393 
[2024-07-23 18:22:15.039832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.393 [2024-07-23 18:22:15.039860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.053725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.053984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.054028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.067521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.067747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.067789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.081561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.081777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.081827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.095492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.095696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.095724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.109231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.109523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.109567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.123181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.123433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.123475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.136943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.137218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.137261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.150835] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.151059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.151101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.164584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.164802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.164844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.178310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.178587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.178631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.192143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.192372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.192398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:34:07.653 [2024-07-23 18:22:15.205837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.206050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.206093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.219589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.219804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.219844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.233123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.233314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.233350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.246136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.246354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.246382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.259907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.260128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.260170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.273905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.274134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.274161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.287744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.287954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.287995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.653 [2024-07-23 18:22:15.301631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.653 [2024-07-23 18:22:15.301910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.653 [2024-07-23 18:22:15.301953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.315872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.316169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.316213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.329806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.330018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.330060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.343878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.344152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.344195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.357790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.357996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.358023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.371638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.371919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.371961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.385434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.385673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.385700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.399139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.399418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.399460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.413131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.413341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 
[2024-07-23 18:22:15.413368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.426961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.427212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.427255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.440749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.440984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.441037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.454573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.454857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.454898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.468373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.468604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1798 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.468632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.912 [2024-07-23 18:22:15.482219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.912 [2024-07-23 18:22:15.482493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.912 [2024-07-23 18:22:15.482523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.913 [2024-07-23 18:22:15.495934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.913 [2024-07-23 18:22:15.496190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.913 [2024-07-23 18:22:15.496217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.913 [2024-07-23 18:22:15.509749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x954cc0) with pdu=0x2000190e5a90 00:34:07.913 [2024-07-23 18:22:15.509977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:07.913 [2024-07-23 18:22:15.510018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:07.913 00:34:07.913 Latency(us) 00:34:07.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.913 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:07.913 nvme0n1 : 2.01 19397.11 75.77 0.00 
0.00 6583.14 3228.25 14272.28 00:34:07.913 =================================================================================================================== 00:34:07.913 Total : 19397.11 75.77 0.00 0.00 6583.14 3228.25 14272.28 00:34:07.913 0 00:34:07.913 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:07.913 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:07.913 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:07.913 | .driver_specific 00:34:07.913 | .nvme_error 00:34:07.913 | .status_code 00:34:07.913 | .command_transient_transport_error' 00:34:07.913 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2497783 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2497783 ']' 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2497783 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2497783 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:08.171 18:22:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2497783' 00:34:08.171 killing process with pid 2497783 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2497783 00:34:08.171 Received shutdown signal, test time was about 2.000000 seconds 00:34:08.171 00:34:08.171 Latency(us) 00:34:08.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.171 =================================================================================================================== 00:34:08.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:08.171 18:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2497783 00:34:08.428 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:08.428 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:08.428 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:08.428 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:08.428 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:08.428 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2498192 00:34:08.429 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:08.429 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2498192 /var/tmp/bperf.sock 00:34:08.429 
18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2498192 ']' 00:34:08.429 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:08.429 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:08.429 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:08.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:08.429 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:08.429 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:08.429 [2024-07-23 18:22:16.063192] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:34:08.429 [2024-07-23 18:22:16.063284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498192 ] 00:34:08.429 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:08.429 Zero copy mechanism will not be used. 
00:34:08.687 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.687 [2024-07-23 18:22:16.121966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.687 [2024-07-23 18:22:16.202607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.687 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:08.687 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:08.687 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:08.687 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:08.944 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:08.944 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.944 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:08.944 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.945 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:08.945 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:09.238 nvme0n1 00:34:09.497 18:22:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:09.497 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.497 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:09.497 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.497 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:09.497 18:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:09.497 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:09.497 Zero copy mechanism will not be used. 00:34:09.497 Running I/O for 2 seconds... 
00:34:09.497 [2024-07-23 18:22:17.008662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.009032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.009069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.014156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.014519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.014550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.020561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.020980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.021035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.026096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.026419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.026447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.031258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.031552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.031580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.036426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.036744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.036771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.041558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.041850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.041877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.046592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.046913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.046942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.051621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.051987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.052015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.056866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.057262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.057311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.062082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.062462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.062500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.068156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.068488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:09.497 [2024-07-23 18:22:17.068518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.073892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.074206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.074234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.079534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.079834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.079862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.084582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.084915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.084944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.089750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.090049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.090077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.095007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.095326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.095380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.101358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.101673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.101700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.106847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.107159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.497 [2024-07-23 18:22:17.107187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.497 [2024-07-23 18:22:17.112328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.497 [2024-07-23 18:22:17.112704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.112732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.498 [2024-07-23 18:22:17.117716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.498 [2024-07-23 18:22:17.118081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.118115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.498 [2024-07-23 18:22:17.123035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.498 [2024-07-23 18:22:17.123331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.123359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.498 [2024-07-23 18:22:17.128689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.498 [2024-07-23 18:22:17.129000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.129027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.498 [2024-07-23 18:22:17.134509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 
00:34:09.498 [2024-07-23 18:22:17.134821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.134848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.498 [2024-07-23 18:22:17.139688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.498 [2024-07-23 18:22:17.139997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.140024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.498 [2024-07-23 18:22:17.144736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.498 [2024-07-23 18:22:17.145025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.145068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.498 [2024-07-23 18:22:17.149846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.498 [2024-07-23 18:22:17.150144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.150172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.498 [2024-07-23 18:22:17.155384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.498 [2024-07-23 18:22:17.155819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.498 [2024-07-23 18:22:17.155866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.161927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.162271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.162300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.167241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.167625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.167655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.172558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.172962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.172996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.177840] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.178144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.178172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.183447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.183779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.183806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.189234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.189559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.189588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.194514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.194864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.194893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.199639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.199983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.200012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.204810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.205122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.205150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.209956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.210237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.210266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.215294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.215610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.215639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.221145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.221482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.221511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.227995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.228346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.228389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.235309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.235703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.235741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.242625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.242985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.243013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.249793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.250220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.250265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.257192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.257503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.257548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.264159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.264486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.757 [2024-07-23 18:22:17.264515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.757 [2024-07-23 18:22:17.270428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.757 [2024-07-23 18:22:17.270720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:09.758 [2024-07-23 18:22:17.270754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.275466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.275785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.275814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.280336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.280640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.280668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.285998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.286448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.286476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.292014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.292368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.292397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.297941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.298263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.298292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.302798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.303158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.303186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.307910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.308234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.308261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.313129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.313448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.313476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.318278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.318593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.318622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.323218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.323555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.323584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.328539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.328856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.328884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.334475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 
00:34:09.758 [2024-07-23 18:22:17.334816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.334843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.339763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.340061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.340088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.344713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.345049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.345077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.349865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.350219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.350246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.355281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.355636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.355664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.360480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.360838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.360866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.366460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.366810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.366840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.372247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.372545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.372575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.377281] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.377602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.377632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.382617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.382956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.382984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.388217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.388515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.388543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.393299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.393608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.393636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.398280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.398591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.398620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.403246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.403539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.403568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.408409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.408714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.758 [2024-07-23 18:22:17.408747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.758 [2024-07-23 18:22:17.413555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:09.758 [2024-07-23 18:22:17.413851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.759 [2024-07-23 18:22:17.413880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.418834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.419181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.419209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.424295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.424598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.424642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.429335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.429637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.429665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.434530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.434830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.434858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.439620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.439922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.439951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.444716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.445012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.445039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.449837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.450131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.450159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.454859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.455143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:10.018 [2024-07-23 18:22:17.455170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.459921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.460266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.460295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.465205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.465515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.465543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.471303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.471735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.471791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.476834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.477147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.477191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.482047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.482455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.482484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.487201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.487519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.487548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.492198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.492570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.492599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.498231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.498542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.498576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.503664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.503744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.503771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.509845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.510129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.510158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.516462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.018 [2024-07-23 18:22:17.516784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.018 [2024-07-23 18:22:17.516814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.018 [2024-07-23 18:22:17.523676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 
00:34:10.018 [2024-07-23 18:22:17.524009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.018 [2024-07-23 18:22:17.524039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.018 [2024-07-23 18:22:17.531232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.018 [2024-07-23 18:22:17.531560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.018 [2024-07-23 18:22:17.531589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.018 [2024-07-23 18:22:17.537251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.018 [2024-07-23 18:22:17.537561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.018 [2024-07-23 18:22:17.537590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.018 [2024-07-23 18:22:17.542412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.018 [2024-07-23 18:22:17.542715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.018 [2024-07-23 18:22:17.542742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.018 [2024-07-23 18:22:17.547653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.018 [2024-07-23 18:22:17.547944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.547972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.553822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.554187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.554214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.559787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.560147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.560174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.565875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.566210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.566238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.572101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.572422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.572451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.578220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.578526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.578555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.583520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.583817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.583845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.588750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.589039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.589066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.593873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.594209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.594236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.599056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.599393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.599422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.604069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.604410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.604439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.609140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.609447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.609475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.614742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.615043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.615070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.620825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.621118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.621147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.626770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.627159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.627192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.632243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.632584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.632626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.637338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.637686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.637713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.642527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.642836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.642863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.647541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.647840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.647873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.652732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.653031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.653059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.659365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.659690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.659731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.666224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.666544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.666574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.019 [2024-07-23 18:22:17.674055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.019 [2024-07-23 18:22:17.674411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.019 [2024-07-23 18:22:17.674441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.278 [2024-07-23 18:22:17.681822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.278 [2024-07-23 18:22:17.682132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.278 [2024-07-23 18:22:17.682160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.278 [2024-07-23 18:22:17.689139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.278 [2024-07-23 18:22:17.689342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.278 [2024-07-23 18:22:17.689371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.278 [2024-07-23 18:22:17.696034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.278 [2024-07-23 18:22:17.696349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.278 [2024-07-23 18:22:17.696377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.278 [2024-07-23 18:22:17.703060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.278 [2024-07-23 18:22:17.703408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.278 [2024-07-23 18:22:17.703436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.278 [2024-07-23 18:22:17.709910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.278 [2024-07-23 18:22:17.710266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.278 [2024-07-23 18:22:17.710294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.278 [2024-07-23 18:22:17.716890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.278 [2024-07-23 18:22:17.717242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.278 [2024-07-23 18:22:17.717269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.278 [2024-07-23 18:22:17.724814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.725132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.725173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.732414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.732718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.732746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.739991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.740349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.740378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.747599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.748026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.748054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.755136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.755482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.755511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.761775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.762065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.762095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.767132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.767445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.767474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.772285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.772616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.772645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.777290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.777607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.777635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.782338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.782645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.782673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.787397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.787706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.787733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.792785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.793085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.793111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.798795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.799109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.799136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.804067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.804395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.804423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.809217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.809622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.809665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.814609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.814974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.815006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.819853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.820150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.820177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.826111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.826435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.826463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.832326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.832657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.832685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.838279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.838640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.838669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.844460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.844790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.844817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.850592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.850916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.850943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.856759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.857069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.857096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.862820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.863280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.863332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.868943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.869243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.869285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.874059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.874390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.874419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.279 [2024-07-23 18:22:17.879101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.279 [2024-07-23 18:22:17.879429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.279 [2024-07-23 18:22:17.879457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.884244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.884578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.884606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.889368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.889685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.889712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.894421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.894710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.894739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.899349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.899701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.899729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.904429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.904747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.904774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.909365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.909665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.909693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.914292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.914591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.914619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.919329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.919711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.919763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.924540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.924890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.924918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.929982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.930336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.930365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.280 [2024-07-23 18:22:17.936849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.280 [2024-07-23 18:22:17.937209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.280 [2024-07-23 18:22:17.937238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.942306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.942641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.942668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.947533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.947842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.947870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.952760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.953057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.953085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.958750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.959100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.959134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.965550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.965859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.965887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.972217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.972562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.972591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.978523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.978849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.978878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.983727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.984023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.984052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.988857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.989139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.989168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.993871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.994223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.994253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.539 [2024-07-23 18:22:17.999005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.539 [2024-07-23 18:22:17.999283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.539 [2024-07-23 18:22:17.999338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:34:10.539 [2024-07-23 18:22:18.003941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.539 [2024-07-23 18:22:18.004241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.539 [2024-07-23 18:22:18.004270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.539 [2024-07-23 18:22:18.009293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.539 [2024-07-23 18:22:18.009704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.539 [2024-07-23 18:22:18.009732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.539 [2024-07-23 18:22:18.015293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.539 [2024-07-23 18:22:18.015609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.015637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.020395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.020713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.020742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.025652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.025941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.025970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.030812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.031108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.031137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.035597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.035894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.035922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.040990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.041270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.041311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.046532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.046854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.046882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.051489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.051788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.051816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.056398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.056694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.056722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.061435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.061749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:10.540 [2024-07-23 18:22:18.061778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.066199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.066464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.066494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.070870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.071154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.071184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.075502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.075774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.075803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.080090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.080386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.080416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.084897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.085184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.085213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.089682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.089954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.089999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.094354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.094628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.094661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.099092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.099398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.099427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.103856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.104141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.104170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.108500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.108770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.108799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.113089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.113383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.113412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.117747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 
00:34:10.540 [2024-07-23 18:22:18.118009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.118036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.122429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.122688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.122718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.127091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.127383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.127413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.132469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.132757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.132786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.137608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.137873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.137902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.142347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.142609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.540 [2024-07-23 18:22:18.142639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.540 [2024-07-23 18:22:18.147104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.540 [2024-07-23 18:22:18.147398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.147428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.152020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.152308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.152346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.157346] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.157579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.157608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.162806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.163048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.163076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.168237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.168476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.168505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.173846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.174087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.174116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.179397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.179642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.179670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.184781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.185009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.185038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.190414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.190660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.190703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.541 [2024-07-23 18:22:18.195972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.541 [2024-07-23 18:22:18.196214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.541 [2024-07-23 18:22:18.196244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.201594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.201841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.201870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.207262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.207495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.207524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.212629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.212873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.212901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.217451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.217710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.217738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.221964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.222210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.222238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.226684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.226930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.226964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.231348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.231593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.231636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.236004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.236256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:10.800 [2024-07-23 18:22:18.236284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.240681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.240920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.240947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.245215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.245452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.245481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.249666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.249908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.800 [2024-07-23 18:22:18.249936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.800 [2024-07-23 18:22:18.254073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:10.800 [2024-07-23 18:22:18.254298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.254334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.258260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.258466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.258493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.263019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.263225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.263252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.267526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.267764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.267791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.271744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.271946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.271972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.275882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.276086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.276113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.280456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.280661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.280688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.285878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.286172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.286202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.291213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.291484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.291513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.296831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.297067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.297096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.302505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.302708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.302736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.307550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.307783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.307812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.311920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.312149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.312179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.316207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.316443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.316470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.320506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.320753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.320783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.325533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.325793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.325823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.330982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.331212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.331241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.335724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.335929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.335957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.339832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.340049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.340078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.344350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.344563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.344592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.349736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.349975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.350009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.354109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.354351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.354385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.358312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.358533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.358561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.362638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.362838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.362866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.366839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.367068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.367097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.371119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.371328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.371356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.376478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.801 [2024-07-23 18:22:18.376721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.801 [2024-07-23 18:22:18.376750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.801 [2024-07-23 18:22:18.381062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.381285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.381313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.386147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.386424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.386452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.391417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.391652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.391680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.397259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.397495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.397525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.403506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.403814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.403843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.409001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.409275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.409303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.414521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.414750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.414793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.420211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.420488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.420517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.425735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.426043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.426072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.431098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.431426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.431455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.436600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.436838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.436871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.442050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.442308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.442344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.447772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.448016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.448044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.453354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:10.802 [2024-07-23 18:22:18.453605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.802 [2024-07-23 18:22:18.453633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:10.802 [2024-07-23 18:22:18.458874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.459197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.459226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.464425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.464654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.464683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.469967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.470251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.470280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.475129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.475360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.475389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.480162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.480430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.480469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.485554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.485901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.485930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.492089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.492383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.492411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.497253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.497501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.497531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.501613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.501817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.501844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.506219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.506501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.506531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.512219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.512459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.512487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.516511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.516730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.516758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.520693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.520897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.520924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.524991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.525195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.525222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.529830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.530071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.530100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.535177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.535456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.061 [2024-07-23 18:22:18.535484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.061 [2024-07-23 18:22:18.541104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.061 [2024-07-23 18:22:18.541346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.541382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.546871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.547204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.547247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.552465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.552799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.552827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.558181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.558461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.558490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.563784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.564071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.564099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.569417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.569676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.569705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.574993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.575237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.575271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.580675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.580940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.580969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.586153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.586448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.586477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.591617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.591899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.591930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.597158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.597478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.597507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.602594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.602834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.602864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.608345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.608657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.608687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.614029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.614253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.614296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.619783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.620122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.620151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.625707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.625963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.625992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.631331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.631584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.631613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.635812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.636011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.636040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.640525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.640762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.640791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.645382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.645610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.645638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.650205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.650457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.650486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.654818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.655072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.655101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.659105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.659308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.659344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.663838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.664143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.664172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.669053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90
00:34:11.062 [2024-07-23 18:22:18.669308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.062 [2024-07-23 18:22:18.669345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:11.062 [2024-07-23 18:22:18.674298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.062 [2024-07-23 18:22:18.674600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.062 [2024-07-23 18:22:18.674628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.062 [2024-07-23 18:22:18.679910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.062 [2024-07-23 18:22:18.680183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.062 [2024-07-23 18:22:18.680212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.062 [2024-07-23 18:22:18.686311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.062 [2024-07-23 18:22:18.686624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.063 [2024-07-23 18:22:18.686667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.063 [2024-07-23 18:22:18.692527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.063 [2024-07-23 18:22:18.692768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.063 [2024-07-23 18:22:18.692796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.063 [2024-07-23 18:22:18.697921] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.063 [2024-07-23 18:22:18.698132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.063 [2024-07-23 18:22:18.698162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.063 [2024-07-23 18:22:18.702066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.063 [2024-07-23 18:22:18.702266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.063 [2024-07-23 18:22:18.702294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.063 [2024-07-23 18:22:18.706217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.063 [2024-07-23 18:22:18.706425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.063 [2024-07-23 18:22:18.706453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.063 [2024-07-23 18:22:18.710347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.063 [2024-07-23 18:22:18.710553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.063 [2024-07-23 18:22:18.710593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:34:11.063 [2024-07-23 18:22:18.714413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.063 [2024-07-23 18:22:18.714622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.063 [2024-07-23 18:22:18.714650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.063 [2024-07-23 18:22:18.718782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.063 [2024-07-23 18:22:18.718999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.063 [2024-07-23 18:22:18.719027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.723050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.322 [2024-07-23 18:22:18.723281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.322 [2024-07-23 18:22:18.723309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.727380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.322 [2024-07-23 18:22:18.727583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.322 [2024-07-23 18:22:18.727611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.731600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.322 [2024-07-23 18:22:18.731826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.322 [2024-07-23 18:22:18.731853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.735780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.322 [2024-07-23 18:22:18.735980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.322 [2024-07-23 18:22:18.736009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.739852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.322 [2024-07-23 18:22:18.740051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.322 [2024-07-23 18:22:18.740079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.744207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.322 [2024-07-23 18:22:18.744421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.322 [2024-07-23 18:22:18.744450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.749273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.322 [2024-07-23 18:22:18.749509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.322 [2024-07-23 18:22:18.749537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.753435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.322 [2024-07-23 18:22:18.753640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.322 [2024-07-23 18:22:18.753679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.322 [2024-07-23 18:22:18.757586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.757791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.757819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.761708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.761910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:11.323 [2024-07-23 18:22:18.761938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.765893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.766110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.766138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.770048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.770251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.770278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.774186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.774396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.774425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.778240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.778454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.778482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.782580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.782789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.782816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.786653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.786870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.786898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.790833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.791035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.791062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.795030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.795231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.795258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.799177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.799387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.799414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.803252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.803458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.803486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.807424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.807641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.807668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.811604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 
00:34:11.323 [2024-07-23 18:22:18.811806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.811833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.815821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.816037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.816065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.820032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.820232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.820283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.824174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.824382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.824410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.828224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.828433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.828461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.832334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.832535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.832562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.836496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.836714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.836743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.840646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.840847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.840874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.844786] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.845002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.845046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.848927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.849129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.849156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.853104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.853307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.853340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.857179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.857424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.857463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.861395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.861598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.861626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.323 [2024-07-23 18:22:18.865517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.323 [2024-07-23 18:22:18.865716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.323 [2024-07-23 18:22:18.865743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.869740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.869971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.870000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.873913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.874128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.874155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.878052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.878268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.878295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.882293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.882507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.882534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.886521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.886723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.886750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.890693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.890896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.890923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.894864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.895068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.895096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.899128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.899338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.899365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.903203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.903412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.903439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.907313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.907642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:11.324 [2024-07-23 18:22:18.907669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.911587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.911792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.911819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.915810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.916011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.916038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.919952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.920154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.920180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.924066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.924267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.924294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.928278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.928489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.928525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.932433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.932634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.932661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.936612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.936826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.936852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.940694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.940893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.940920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.944824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.945020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.945048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.949524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.949741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.949770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.954942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.955256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.955284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.960240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 
00:34:11.324 [2024-07-23 18:22:18.960496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.960526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.966477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.966754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.966783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.972242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.972549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.972579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.324 [2024-07-23 18:22:18.978731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.324 [2024-07-23 18:22:18.978954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.324 [2024-07-23 18:22:18.978984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.583 [2024-07-23 18:22:18.985114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.583 [2024-07-23 18:22:18.985443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.583 [2024-07-23 18:22:18.985473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.583 [2024-07-23 18:22:18.991023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.583 [2024-07-23 18:22:18.991233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.583 [2024-07-23 18:22:18.991261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:11.583 [2024-07-23 18:22:18.995735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.583 [2024-07-23 18:22:18.995940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.583 [2024-07-23 18:22:18.995969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.583 [2024-07-23 18:22:18.999982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.583 [2024-07-23 18:22:19.000188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.583 [2024-07-23 18:22:19.000216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:11.583 [2024-07-23 18:22:19.004085] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x956940) with pdu=0x2000190fef90 00:34:11.583 [2024-07-23 18:22:19.004288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.583 [2024-07-23 18:22:19.004315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:11.583 00:34:11.583 Latency(us) 00:34:11.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.583 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:11.583 nvme0n1 : 2.00 5929.13 741.14 0.00 0.00 2691.77 1844.72 8058.50 00:34:11.583 =================================================================================================================== 00:34:11.583 Total : 5929.13 741.14 0.00 0.00 2691.77 1844.72 8058.50 00:34:11.583 0 00:34:11.583 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:11.583 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:11.583 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:11.583 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:11.583 | .driver_specific 00:34:11.583 | .nvme_error 00:34:11.583 | .status_code 00:34:11.583 | .command_transient_transport_error' 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 382 > 0 )) 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2498192 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@948 -- # '[' -z 2498192 ']' 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2498192 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2498192 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2498192' 00:34:11.841 killing process with pid 2498192 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2498192 00:34:11.841 Received shutdown signal, test time was about 2.000000 seconds 00:34:11.841 00:34:11.841 Latency(us) 00:34:11.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.841 =================================================================================================================== 00:34:11.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:11.841 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2498192 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2496834 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2496834 ']' 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@952 -- # kill -0 2496834 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2496834 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2496834' 00:34:12.099 killing process with pid 2496834 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2496834 00:34:12.099 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2496834 00:34:12.357 00:34:12.357 real 0m14.983s 00:34:12.357 user 0m29.684s 00:34:12.357 sys 0m4.211s 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:12.357 ************************************ 00:34:12.357 END TEST nvmf_digest_error 00:34:12.357 ************************************ 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:12.357 rmmod nvme_tcp 00:34:12.357 rmmod nvme_fabrics 00:34:12.357 rmmod nvme_keyring 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2496834 ']' 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2496834 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2496834 ']' 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2496834 00:34:12.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2496834) - No such process 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2496834 is not found' 00:34:12.357 Process with pid 2496834 is not found 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.357 18:22:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:14.263 00:34:14.263 real 0m34.806s 00:34:14.263 user 1m1.091s 00:34:14.263 sys 0m9.918s 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.263 ************************************ 00:34:14.263 END TEST nvmf_digest 00:34:14.263 ************************************ 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.263 18:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.522 ************************************ 00:34:14.522 START TEST nvmf_bdevperf 00:34:14.522 ************************************ 00:34:14.522 18:22:21 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:14.522 * Looking for test storage... 00:34:14.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.522 18:22:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:14.522 18:22:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:16.421 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.422 18:22:24 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:16.422 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:16.422 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:16.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:16.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.422 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:16.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:16.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:34:16.679 00:34:16.679 --- 10.0.0.2 ping statistics --- 00:34:16.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.679 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:16.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:34:16.679 00:34:16.679 --- 10.0.0.1 ping statistics --- 00:34:16.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.679 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:16.679 
18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2500542 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2500542 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2500542 ']' 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:16.679 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.679 [2024-07-23 18:22:24.260364] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:34:16.679 [2024-07-23 18:22:24.260445] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.679 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.679 [2024-07-23 18:22:24.325412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:16.936 [2024-07-23 18:22:24.412987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.936 [2024-07-23 18:22:24.413040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.936 [2024-07-23 18:22:24.413063] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.936 [2024-07-23 18:22:24.413074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.936 [2024-07-23 18:22:24.413088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:16.936 [2024-07-23 18:22:24.413143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.936 [2024-07-23 18:22:24.413198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:16.936 [2024-07-23 18:22:24.413201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.936 [2024-07-23 18:22:24.552217] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.936 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.194 Malloc0 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.194 [2024-07-23 18:22:24.617396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:17.194 
18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:17.194 { 00:34:17.194 "params": { 00:34:17.194 "name": "Nvme$subsystem", 00:34:17.194 "trtype": "$TEST_TRANSPORT", 00:34:17.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.194 "adrfam": "ipv4", 00:34:17.194 "trsvcid": "$NVMF_PORT", 00:34:17.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.194 "hdgst": ${hdgst:-false}, 00:34:17.194 "ddgst": ${ddgst:-false} 00:34:17.194 }, 00:34:17.194 "method": "bdev_nvme_attach_controller" 00:34:17.194 } 00:34:17.194 EOF 00:34:17.194 )") 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:17.194 18:22:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:17.194 "params": { 00:34:17.194 "name": "Nvme1", 00:34:17.194 "trtype": "tcp", 00:34:17.194 "traddr": "10.0.0.2", 00:34:17.195 "adrfam": "ipv4", 00:34:17.195 "trsvcid": "4420", 00:34:17.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.195 "hdgst": false, 00:34:17.195 "ddgst": false 00:34:17.195 }, 00:34:17.195 "method": "bdev_nvme_attach_controller" 00:34:17.195 }' 00:34:17.195 [2024-07-23 18:22:24.666731] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:34:17.195 [2024-07-23 18:22:24.666811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500683 ] 00:34:17.195 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.195 [2024-07-23 18:22:24.726798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.195 [2024-07-23 18:22:24.816876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.452 Running I/O for 1 seconds... 00:34:18.381 00:34:18.381 Latency(us) 00:34:18.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.381 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:18.381 Verification LBA range: start 0x0 length 0x4000 00:34:18.381 Nvme1n1 : 1.02 8830.30 34.49 0.00 0.00 14434.61 3179.71 15437.37 00:34:18.381 =================================================================================================================== 00:34:18.381 Total : 8830.30 34.49 0.00 0.00 14434.61 3179.71 15437.37 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2500826 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:18.639 18:22:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:18.639 { 00:34:18.639 "params": { 00:34:18.639 "name": "Nvme$subsystem", 00:34:18.639 "trtype": "$TEST_TRANSPORT", 00:34:18.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.639 "adrfam": "ipv4", 00:34:18.639 "trsvcid": "$NVMF_PORT", 00:34:18.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.639 "hdgst": ${hdgst:-false}, 00:34:18.639 "ddgst": ${ddgst:-false} 00:34:18.639 }, 00:34:18.639 "method": "bdev_nvme_attach_controller" 00:34:18.639 } 00:34:18.639 EOF 00:34:18.639 )") 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:18.639 18:22:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:18.639 "params": { 00:34:18.639 "name": "Nvme1", 00:34:18.639 "trtype": "tcp", 00:34:18.639 "traddr": "10.0.0.2", 00:34:18.639 "adrfam": "ipv4", 00:34:18.639 "trsvcid": "4420", 00:34:18.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:18.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:18.639 "hdgst": false, 00:34:18.639 "ddgst": false 00:34:18.639 }, 00:34:18.639 "method": "bdev_nvme_attach_controller" 00:34:18.639 }' 00:34:18.639 [2024-07-23 18:22:26.269073] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:34:18.639 [2024-07-23 18:22:26.269164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500826 ] 00:34:18.639 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.895 [2024-07-23 18:22:26.329500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.895 [2024-07-23 18:22:26.412999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.151 Running I/O for 15 seconds... 00:34:21.678 18:22:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2500542 00:34:21.678 18:22:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:21.678 [2024-07-23 18:22:29.239843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.678 [2024-07-23 18:22:29.239896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-07-23 18:22:29.239926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.678 [2024-07-23 18:22:29.239944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-07-23 18:22:29.239960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.678 [2024-07-23 18:22:29.239975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-07-23 18:22:29.240006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50936 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.678 [2024-07-23 18:22:29.240020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-07-23 18:22:29.240035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.678 [2024-07-23 18:22:29.240049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-07-23 18:22:29.240082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.678 [2024-07-23 18:22:29.240097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-07-23 18:22:29.240111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.678 [2024-07-23 18:22:29.240141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-07-23 18:22:29.240157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.678 [2024-07-23 18:22:29.240170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-07-23 18:22:29.240185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.678 [2024-07-23 18:22:29.240198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 
18:22:29.240214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.679 [2024-07-23 18:22:29.240229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.679 [2024-07-23 18:22:29.240268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.679 [2024-07-23 18:22:29.240311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.679 [2024-07-23 18:22:29.240366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.679 [2024-07-23 18:22:29.240398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.679 [2024-07-23 18:22:29.240429] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.240984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.240997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 
[2024-07-23 18:22:29.241101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241240] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.679 [2024-07-23 18:22:29.241369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.679 [2024-07-23 18:22:29.241385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 
lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 
18:22:29.241950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.241975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.241987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242087] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.680 [2024-07-23 18:22:29.242191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.680 [2024-07-23 18:22:29.242216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:21.680 [2024-07-23 18:22:29.242242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.680 [2024-07-23 18:22:29.242266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.680 [2024-07-23 18:22:29.242292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.680 [2024-07-23 18:22:29.242358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.680 [2024-07-23 18:22:29.242388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.680 [2024-07-23 18:22:29.242423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242440] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.680 [2024-07-23 18:22:29.242530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.680 [2024-07-23 18:22:29.242545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:21.681 [2024-07-23 18:22:29.242772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.242985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.242998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.243010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.243035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.243059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.243084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.243109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.243134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.243159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 [2024-07-23 18:22:29.243183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:21.681 
[2024-07-23 18:22:29.243211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.681 [2024-07-23 18:22:29.243676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:21.681 [2024-07-23 18:22:29.243688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.682 [2024-07-23 18:22:29.243701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b390 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.243717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:21.682 [2024-07-23 18:22:29.243728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:21.682 [2024-07-23 18:22:29.243738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:50904 len:8 PRP1 0x0 PRP2 0x0 00:34:21.682 [2024-07-23 18:22:29.243754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.682 [2024-07-23 18:22:29.243816] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f3b390 was disconnected and freed. reset controller. 00:34:21.682 [2024-07-23 18:22:29.243892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.682 [2024-07-23 18:22:29.243917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.682 [2024-07-23 18:22:29.243932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.682 [2024-07-23 18:22:29.243959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.682 [2024-07-23 18:22:29.243973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.682 [2024-07-23 18:22:29.243985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.682 [2024-07-23 18:22:29.243998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.682 [2024-07-23 18:22:29.244010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.682 [2024-07-23 18:22:29.244022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.247108] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.682 [2024-07-23 18:22:29.247145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.682 [2024-07-23 18:22:29.248020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.682 [2024-07-23 18:22:29.248050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.682 [2024-07-23 18:22:29.248067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.248303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.682 [2024-07-23 18:22:29.248585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.682 [2024-07-23 18:22:29.248621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.682 [2024-07-23 18:22:29.248638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.682 [2024-07-23 18:22:29.251727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
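Every aborted completion above carries the status pair "(00/08)": NVMe reports completion status as a status code type (SCT) and a status code (SC), where type 0x0 is the generic command status set and code 0x08 within it is "Command Aborted due to SQ Deletion" — exactly what the driver prints when queued I/O is failed back because its submission queue was torn down during the controller reset. A minimal sketch of decoding that "(sct/sc)" token (the table covers only a few generic codes relevant to this log, not the full NVMe status table, and the helper name is illustrative, not SPDK's):

```python
# Decode the "(sct/sc)" pair printed by spdk_nvme_print_completion.
# Only a small subset of generic (SCT 0x0) codes is mapped here;
# this is an illustrative sketch, not the complete NVMe status table.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(pair: str) -> str:
    """Turn a log token like '(00/08)' into a readable status string."""
    sct, sc = (int(x, 16) for x in pair.strip("()").split("/"))
    if sct == 0x0:  # generic command status set
        return GENERIC_STATUS.get(sc, f"GENERIC 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"
```

Applied to the dump above, `decode_status("(00/08)")` yields the same "ABORTED - SQ DELETION" string the log shows for every in-flight WRITE and READ.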
00:34:21.682 [2024-07-23 18:22:29.260872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.682 [2024-07-23 18:22:29.261294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.682 [2024-07-23 18:22:29.261350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.682 [2024-07-23 18:22:29.261368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.261625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.682 [2024-07-23 18:22:29.261815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.682 [2024-07-23 18:22:29.261834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.682 [2024-07-23 18:22:29.261847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.682 [2024-07-23 18:22:29.264682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.682 [2024-07-23 18:22:29.273834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.682 [2024-07-23 18:22:29.274226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.682 [2024-07-23 18:22:29.274254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.682 [2024-07-23 18:22:29.274270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.274518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.682 [2024-07-23 18:22:29.274744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.682 [2024-07-23 18:22:29.274765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.682 [2024-07-23 18:22:29.274777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.682 [2024-07-23 18:22:29.277804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.682 [2024-07-23 18:22:29.286870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.682 [2024-07-23 18:22:29.287256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.682 [2024-07-23 18:22:29.287284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.682 [2024-07-23 18:22:29.287300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.287565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.682 [2024-07-23 18:22:29.287789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.682 [2024-07-23 18:22:29.287809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.682 [2024-07-23 18:22:29.287820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.682 [2024-07-23 18:22:29.290698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.682 [2024-07-23 18:22:29.299978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.682 [2024-07-23 18:22:29.300398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.682 [2024-07-23 18:22:29.300428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.682 [2024-07-23 18:22:29.300445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.300678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.682 [2024-07-23 18:22:29.300882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.682 [2024-07-23 18:22:29.300903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.682 [2024-07-23 18:22:29.300915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.682 [2024-07-23 18:22:29.303714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.682 [2024-07-23 18:22:29.313009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.682 [2024-07-23 18:22:29.313428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.682 [2024-07-23 18:22:29.313477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.682 [2024-07-23 18:22:29.313493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.313722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.682 [2024-07-23 18:22:29.313925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.682 [2024-07-23 18:22:29.313946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.682 [2024-07-23 18:22:29.313959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.682 [2024-07-23 18:22:29.316759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.682 [2024-07-23 18:22:29.326065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.682 [2024-07-23 18:22:29.326481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.682 [2024-07-23 18:22:29.326511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.682 [2024-07-23 18:22:29.326528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.682 [2024-07-23 18:22:29.326769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.682 [2024-07-23 18:22:29.326973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.682 [2024-07-23 18:22:29.326993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.682 [2024-07-23 18:22:29.327006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.682 [2024-07-23 18:22:29.329886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.941 [2024-07-23 18:22:29.339358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.941 [2024-07-23 18:22:29.339692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.941 [2024-07-23 18:22:29.339735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.941 [2024-07-23 18:22:29.339756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.941 [2024-07-23 18:22:29.339996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.941 [2024-07-23 18:22:29.340216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.941 [2024-07-23 18:22:29.340236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.941 [2024-07-23 18:22:29.340248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.941 [2024-07-23 18:22:29.343126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.941 [2024-07-23 18:22:29.352459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.941 [2024-07-23 18:22:29.352818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.941 [2024-07-23 18:22:29.352847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.941 [2024-07-23 18:22:29.352863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.941 [2024-07-23 18:22:29.353098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.941 [2024-07-23 18:22:29.353301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.941 [2024-07-23 18:22:29.353347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.941 [2024-07-23 18:22:29.353363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.356254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.365579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.365994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.366022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.366038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.366277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.366501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.366524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.366537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.369425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.378688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.379106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.379134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.379149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.379396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.379596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.379634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.379647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.382558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.391860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.392278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.392324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.392342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.392576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.392780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.392800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.392812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.395613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.404905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.405233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.405260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.405275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.405535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.405742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.405762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.405774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.408646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.418083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.418499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.418527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.418542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.418775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.418979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.418999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.419012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.421887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.431189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.431584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.431612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.431628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.431843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.432046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.432065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.432078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.434993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.444270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.444613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.444641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.444657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.444872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.445076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.445095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.445107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.447983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.457398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.457811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.457840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.457855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.458091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.458294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.458314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.458352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.461204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.470528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.470941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.470970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.470991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.471228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.471464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.471485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.471498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.942 [2024-07-23 18:22:29.474362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.942 [2024-07-23 18:22:29.483810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.942 [2024-07-23 18:22:29.484167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-23 18:22:29.484195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.942 [2024-07-23 18:22:29.484210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.942 [2024-07-23 18:22:29.484467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.942 [2024-07-23 18:22:29.484673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.942 [2024-07-23 18:22:29.484694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.942 [2024-07-23 18:22:29.484706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.943 [2024-07-23 18:22:29.487563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.943 [2024-07-23 18:22:29.497011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.943 [2024-07-23 18:22:29.497366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-23 18:22:29.497396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.943 [2024-07-23 18:22:29.497412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.943 [2024-07-23 18:22:29.497649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.943 [2024-07-23 18:22:29.497842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.943 [2024-07-23 18:22:29.497862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.943 [2024-07-23 18:22:29.497875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.943 [2024-07-23 18:22:29.501203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.943 [2024-07-23 18:22:29.510594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.943 [2024-07-23 18:22:29.511011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-23 18:22:29.511040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.943 [2024-07-23 18:22:29.511056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.943 [2024-07-23 18:22:29.511264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.943 [2024-07-23 18:22:29.511525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.943 [2024-07-23 18:22:29.511547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.943 [2024-07-23 18:22:29.511581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.943 [2024-07-23 18:22:29.514673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.943 [2024-07-23 18:22:29.523652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.943 [2024-07-23 18:22:29.524012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-23 18:22:29.524041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.943 [2024-07-23 18:22:29.524058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.943 [2024-07-23 18:22:29.524292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.943 [2024-07-23 18:22:29.524512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.943 [2024-07-23 18:22:29.524534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.943 [2024-07-23 18:22:29.524547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.943 [2024-07-23 18:22:29.527413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.943 [2024-07-23 18:22:29.536723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.943 [2024-07-23 18:22:29.537201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-23 18:22:29.537253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.943 [2024-07-23 18:22:29.537269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.943 [2024-07-23 18:22:29.537523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.943 [2024-07-23 18:22:29.537730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.943 [2024-07-23 18:22:29.537750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.943 [2024-07-23 18:22:29.537762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.943 [2024-07-23 18:22:29.540676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.943 [2024-07-23 18:22:29.549951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.943 [2024-07-23 18:22:29.550371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-23 18:22:29.550401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.943 [2024-07-23 18:22:29.550417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.943 [2024-07-23 18:22:29.550660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.943 [2024-07-23 18:22:29.550862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.943 [2024-07-23 18:22:29.550882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.943 [2024-07-23 18:22:29.550894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.943 [2024-07-23 18:22:29.553774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.943 [2024-07-23 18:22:29.562941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.943 [2024-07-23 18:22:29.563302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-23 18:22:29.563340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:21.943 [2024-07-23 18:22:29.563357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:21.943 [2024-07-23 18:22:29.563592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:21.943 [2024-07-23 18:22:29.563795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.943 [2024-07-23 18:22:29.563815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.943 [2024-07-23 18:22:29.563827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.943 [2024-07-23 18:22:29.566735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.943 [2024-07-23 18:22:29.576170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.943 [2024-07-23 18:22:29.576553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.943 [2024-07-23 18:22:29.576582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:21.943 [2024-07-23 18:22:29.576612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:21.943 [2024-07-23 18:22:29.576845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:21.943 [2024-07-23 18:22:29.577048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.943 [2024-07-23 18:22:29.577066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.943 [2024-07-23 18:22:29.577078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.943 [2024-07-23 18:22:29.579984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.943 [2024-07-23 18:22:29.589274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.943 [2024-07-23 18:22:29.589683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.943 [2024-07-23 18:22:29.589711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:21.943 [2024-07-23 18:22:29.589726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:21.943 [2024-07-23 18:22:29.589942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:21.943 [2024-07-23 18:22:29.590144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.943 [2024-07-23 18:22:29.590164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.943 [2024-07-23 18:22:29.590176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.943 [2024-07-23 18:22:29.593091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.201 [2024-07-23 18:22:29.602657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.201 [2024-07-23 18:22:29.603078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.201 [2024-07-23 18:22:29.603131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.201 [2024-07-23 18:22:29.603147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.201 [2024-07-23 18:22:29.603408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.201 [2024-07-23 18:22:29.603601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.201 [2024-07-23 18:22:29.603635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.201 [2024-07-23 18:22:29.603649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.201 [2024-07-23 18:22:29.606859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.201 [2024-07-23 18:22:29.615854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.201 [2024-07-23 18:22:29.616236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.201 [2024-07-23 18:22:29.616263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.201 [2024-07-23 18:22:29.616279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.201 [2024-07-23 18:22:29.616541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.201 [2024-07-23 18:22:29.616766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.201 [2024-07-23 18:22:29.616786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.201 [2024-07-23 18:22:29.616798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.201 [2024-07-23 18:22:29.619673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.201 [2024-07-23 18:22:29.628912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.201 [2024-07-23 18:22:29.629304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.201 [2024-07-23 18:22:29.629340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.201 [2024-07-23 18:22:29.629357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.201 [2024-07-23 18:22:29.629573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.201 [2024-07-23 18:22:29.629777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.201 [2024-07-23 18:22:29.629796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.201 [2024-07-23 18:22:29.629809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.632620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.641971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.642384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.642412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.642428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.642665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.642869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.642889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.642906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.645792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.655105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.655541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.655569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.655585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.655825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.656029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.656049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.656061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.658937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.668240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.668660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.668688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.668703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.668934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.669138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.669158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.669169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.672047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.681360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.681715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.681742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.681758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.681996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.682199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.682219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.682232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.685163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.694459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.694811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.694845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.694862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.695090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.695278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.695296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.695309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.698187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.707638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.707957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.707985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.708000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.708217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.708451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.708473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.708486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.711356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.720781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.721135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.721162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.721177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.721419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.721612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.721646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.721659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.724513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.733966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.734392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.734420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.734437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.734673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.734881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.734901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.734913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.737791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.747070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.747387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.747416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.747432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.747649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.747857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.747877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.202 [2024-07-23 18:22:29.747890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.202 [2024-07-23 18:22:29.751154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.202 [2024-07-23 18:22:29.760159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.202 [2024-07-23 18:22:29.760536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.202 [2024-07-23 18:22:29.760566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.202 [2024-07-23 18:22:29.760582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.202 [2024-07-23 18:22:29.760826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.202 [2024-07-23 18:22:29.761028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.202 [2024-07-23 18:22:29.761048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.203 [2024-07-23 18:22:29.761060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.203 [2024-07-23 18:22:29.763929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.203 [2024-07-23 18:22:29.773241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.203 [2024-07-23 18:22:29.773682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.203 [2024-07-23 18:22:29.773711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.203 [2024-07-23 18:22:29.773728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.203 [2024-07-23 18:22:29.773962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.203 [2024-07-23 18:22:29.774166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.203 [2024-07-23 18:22:29.774186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.203 [2024-07-23 18:22:29.774199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.203 [2024-07-23 18:22:29.777074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.203 [2024-07-23 18:22:29.786286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.203 [2024-07-23 18:22:29.786646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.203 [2024-07-23 18:22:29.786675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.203 [2024-07-23 18:22:29.786691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.203 [2024-07-23 18:22:29.786926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.203 [2024-07-23 18:22:29.787131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.203 [2024-07-23 18:22:29.787151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.203 [2024-07-23 18:22:29.787163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.203 [2024-07-23 18:22:29.790039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.203 [2024-07-23 18:22:29.799477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.203 [2024-07-23 18:22:29.799891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.203 [2024-07-23 18:22:29.799920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.203 [2024-07-23 18:22:29.799935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.203 [2024-07-23 18:22:29.800169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.203 [2024-07-23 18:22:29.800401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.203 [2024-07-23 18:22:29.800422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.203 [2024-07-23 18:22:29.800436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.203 [2024-07-23 18:22:29.803283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.203 [2024-07-23 18:22:29.812605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.203 [2024-07-23 18:22:29.813018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.203 [2024-07-23 18:22:29.813046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.203 [2024-07-23 18:22:29.813062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.203 [2024-07-23 18:22:29.813299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.203 [2024-07-23 18:22:29.813503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.203 [2024-07-23 18:22:29.813523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.203 [2024-07-23 18:22:29.813536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.203 [2024-07-23 18:22:29.816404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.203 [2024-07-23 18:22:29.825716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.203 [2024-07-23 18:22:29.826067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.203 [2024-07-23 18:22:29.826095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.203 [2024-07-23 18:22:29.826116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.203 [2024-07-23 18:22:29.826364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.203 [2024-07-23 18:22:29.826557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.203 [2024-07-23 18:22:29.826576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.203 [2024-07-23 18:22:29.826589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.203 [2024-07-23 18:22:29.829457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.203 [2024-07-23 18:22:29.838952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.203 [2024-07-23 18:22:29.839377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.203 [2024-07-23 18:22:29.839405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.203 [2024-07-23 18:22:29.839420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.203 [2024-07-23 18:22:29.839634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.203 [2024-07-23 18:22:29.839837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.203 [2024-07-23 18:22:29.839857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.203 [2024-07-23 18:22:29.839868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.203 [2024-07-23 18:22:29.842781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.203 [2024-07-23 18:22:29.852098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.203 [2024-07-23 18:22:29.852423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.203 [2024-07-23 18:22:29.852451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.203 [2024-07-23 18:22:29.852467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.203 [2024-07-23 18:22:29.852683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.203 [2024-07-23 18:22:29.852888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.203 [2024-07-23 18:22:29.852908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.203 [2024-07-23 18:22:29.852920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.203 [2024-07-23 18:22:29.855840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.462 [2024-07-23 18:22:29.865591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.462 [2024-07-23 18:22:29.866059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.462 [2024-07-23 18:22:29.866088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.462 [2024-07-23 18:22:29.866104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.462 [2024-07-23 18:22:29.866352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.462 [2024-07-23 18:22:29.866546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.462 [2024-07-23 18:22:29.866571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.462 [2024-07-23 18:22:29.866584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.462 [2024-07-23 18:22:29.869467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.462 [2024-07-23 18:22:29.878772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.462 [2024-07-23 18:22:29.879129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.462 [2024-07-23 18:22:29.879157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.462 [2024-07-23 18:22:29.879173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.462 [2024-07-23 18:22:29.879423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.462 [2024-07-23 18:22:29.879647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.463 [2024-07-23 18:22:29.879667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.463 [2024-07-23 18:22:29.879679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.463 [2024-07-23 18:22:29.882533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.463 [2024-07-23 18:22:29.891971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.892416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.892446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.892463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.892695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.892898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.892917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.892929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:29.895944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:29.905253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.905695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.905722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.905737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.905952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.906156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.906174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.906187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:29.909191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:29.918700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.919114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.919169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.919185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.919437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.919631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.919651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.919663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:29.922531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:29.931818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.932226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.932279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.932295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.932526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.932732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.932752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.932765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:29.935671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:29.944988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.945304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.945341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.945358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.945574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.945777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.945797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.945809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:29.948646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:29.958170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.958611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.958655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.958671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.958912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.959114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.959134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.959147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:29.962025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:29.971428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.971807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.971834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.971849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.972062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.972265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.972285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.972310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:29.975211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:29.984722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.985072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.985099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.985114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.985350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.985545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.985565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.985577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:29.988445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:29.997899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:29.998263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:29.998291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:29.998307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:29.998558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.463 [2024-07-23 18:22:29.998803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.463 [2024-07-23 18:22:29.998823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.463 [2024-07-23 18:22:29.998840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.463 [2024-07-23 18:22:30.002210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.463 [2024-07-23 18:22:30.011188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.463 [2024-07-23 18:22:30.011610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.463 [2024-07-23 18:22:30.011653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.463 [2024-07-23 18:22:30.011670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.463 [2024-07-23 18:22:30.011900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.012094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.012115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.012128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.464 [2024-07-23 18:22:30.015085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.464 [2024-07-23 18:22:30.024500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.464 [2024-07-23 18:22:30.024837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.464 [2024-07-23 18:22:30.024867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.464 [2024-07-23 18:22:30.024886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.464 [2024-07-23 18:22:30.025100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.025312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.025344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.025370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.464 [2024-07-23 18:22:30.028269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.464 [2024-07-23 18:22:30.037801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.464 [2024-07-23 18:22:30.038166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.464 [2024-07-23 18:22:30.038196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.464 [2024-07-23 18:22:30.038213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.464 [2024-07-23 18:22:30.038475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.038707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.038728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.038740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.464 [2024-07-23 18:22:30.041791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.464 [2024-07-23 18:22:30.051340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.464 [2024-07-23 18:22:30.051784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.464 [2024-07-23 18:22:30.051815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.464 [2024-07-23 18:22:30.051831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.464 [2024-07-23 18:22:30.052069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.052273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.052292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.052328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.464 [2024-07-23 18:22:30.055410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.464 [2024-07-23 18:22:30.064562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.464 [2024-07-23 18:22:30.065021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.464 [2024-07-23 18:22:30.065069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.464 [2024-07-23 18:22:30.065087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.464 [2024-07-23 18:22:30.065333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.065555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.065583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.065596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.464 [2024-07-23 18:22:30.068788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.464 [2024-07-23 18:22:30.078204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.464 [2024-07-23 18:22:30.078621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.464 [2024-07-23 18:22:30.078670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.464 [2024-07-23 18:22:30.078686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.464 [2024-07-23 18:22:30.078921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.079135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.079157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.079170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.464 [2024-07-23 18:22:30.082427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.464 [2024-07-23 18:22:30.091901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.464 [2024-07-23 18:22:30.092276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.464 [2024-07-23 18:22:30.092305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.464 [2024-07-23 18:22:30.092361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.464 [2024-07-23 18:22:30.092583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.092819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.092840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.092853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.464 [2024-07-23 18:22:30.096056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.464 [2024-07-23 18:22:30.105423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.464 [2024-07-23 18:22:30.105848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.464 [2024-07-23 18:22:30.105878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.464 [2024-07-23 18:22:30.105894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.464 [2024-07-23 18:22:30.106149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.106388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.106412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.106426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.464 [2024-07-23 18:22:30.109742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.464 [2024-07-23 18:22:30.119144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.464 [2024-07-23 18:22:30.119532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.464 [2024-07-23 18:22:30.119581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.464 [2024-07-23 18:22:30.119598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.464 [2024-07-23 18:22:30.119845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.464 [2024-07-23 18:22:30.120049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.464 [2024-07-23 18:22:30.120068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.464 [2024-07-23 18:22:30.120080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.723 [2024-07-23 18:22:30.123460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.723 [2024-07-23 18:22:30.132664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.723 [2024-07-23 18:22:30.133030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.723 [2024-07-23 18:22:30.133082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.723 [2024-07-23 18:22:30.133099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.723 [2024-07-23 18:22:30.133347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.723 [2024-07-23 18:22:30.133567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.723 [2024-07-23 18:22:30.133589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.723 [2024-07-23 18:22:30.133622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.723 [2024-07-23 18:22:30.136879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.723 [2024-07-23 18:22:30.146323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.723 [2024-07-23 18:22:30.146763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.723 [2024-07-23 18:22:30.146812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.723 [2024-07-23 18:22:30.146828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.723 [2024-07-23 18:22:30.147061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.723 [2024-07-23 18:22:30.147259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.723 [2024-07-23 18:22:30.147279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.723 [2024-07-23 18:22:30.147292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.723 [2024-07-23 18:22:30.150545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.723 [2024-07-23 18:22:30.159834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.723 [2024-07-23 18:22:30.160178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.723 [2024-07-23 18:22:30.160222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.723 [2024-07-23 18:22:30.160238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.723 [2024-07-23 18:22:30.160477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.723 [2024-07-23 18:22:30.160707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.723 [2024-07-23 18:22:30.160728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.723 [2024-07-23 18:22:30.160741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.723 [2024-07-23 18:22:30.163964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.723 [2024-07-23 18:22:30.173141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.723 [2024-07-23 18:22:30.173458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.723 [2024-07-23 18:22:30.173488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.723 [2024-07-23 18:22:30.173504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.723 [2024-07-23 18:22:30.173739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.723 [2024-07-23 18:22:30.173942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.723 [2024-07-23 18:22:30.173962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.723 [2024-07-23 18:22:30.173974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.723 [2024-07-23 18:22:30.177015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.723 [2024-07-23 18:22:30.186486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.723 [2024-07-23 18:22:30.186891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.723 [2024-07-23 18:22:30.186923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.723 [2024-07-23 18:22:30.186940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.723 [2024-07-23 18:22:30.187177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.723 [2024-07-23 18:22:30.187425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.723 [2024-07-23 18:22:30.187450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.723 [2024-07-23 18:22:30.187464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.723 [2024-07-23 18:22:30.190462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.723 [2024-07-23 18:22:30.199695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.723 [2024-07-23 18:22:30.200077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.723 [2024-07-23 18:22:30.200104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.723 [2024-07-23 18:22:30.200120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.723 [2024-07-23 18:22:30.200330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.723 [2024-07-23 18:22:30.200545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.723 [2024-07-23 18:22:30.200565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.723 [2024-07-23 18:22:30.200578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.723 [2024-07-23 18:22:30.203504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.212960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.213377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.213405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.213422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.213657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.213845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.213863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.213875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.216831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.226086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.226441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.226470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.226486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.226720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.226927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.226946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.226958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.229721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.239176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.239537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.239566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.239582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.239820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.240024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.240043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.240055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.242971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.252260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.252690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.252720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.252737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.252982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.253208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.253228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.253241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.256658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.265470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.265790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.265818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.265833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.266050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.266254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.266273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.266285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.269387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.278762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.279115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.279143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.279158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.279405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.279614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.279633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.279645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.282497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.291989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.292369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.292398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.292414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.292635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.292839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.292858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.292870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.295747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.305001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.305354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.305382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.305397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.305612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.305815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.305834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.305846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.308665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.317969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.318383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.318412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.318433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.318669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.318873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.318892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.318904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.321706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.724 [2024-07-23 18:22:30.330958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.724 [2024-07-23 18:22:30.331309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.724 [2024-07-23 18:22:30.331343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.724 [2024-07-23 18:22:30.331359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.724 [2024-07-23 18:22:30.331590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.724 [2024-07-23 18:22:30.331795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.724 [2024-07-23 18:22:30.331814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.724 [2024-07-23 18:22:30.331826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.724 [2024-07-23 18:22:30.334624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.725 [2024-07-23 18:22:30.343956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.725 [2024-07-23 18:22:30.344370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.725 [2024-07-23 18:22:30.344398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.725 [2024-07-23 18:22:30.344414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.725 [2024-07-23 18:22:30.344653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.725 [2024-07-23 18:22:30.344856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.725 [2024-07-23 18:22:30.344875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.725 [2024-07-23 18:22:30.344887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.725 [2024-07-23 18:22:30.347774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.725 [2024-07-23 18:22:30.357103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.725 [2024-07-23 18:22:30.357554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.725 [2024-07-23 18:22:30.357583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.725 [2024-07-23 18:22:30.357614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.725 [2024-07-23 18:22:30.357847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.725 [2024-07-23 18:22:30.358050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.725 [2024-07-23 18:22:30.358075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.725 [2024-07-23 18:22:30.358088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.725 [2024-07-23 18:22:30.360927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.725 [2024-07-23 18:22:30.370216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.725 [2024-07-23 18:22:30.370548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.725 [2024-07-23 18:22:30.370576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.725 [2024-07-23 18:22:30.370592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.725 [2024-07-23 18:22:30.370810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.725 [2024-07-23 18:22:30.371013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.725 [2024-07-23 18:22:30.371032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.725 [2024-07-23 18:22:30.371044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.725 [2024-07-23 18:22:30.373961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.983 [2024-07-23 18:22:30.383729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.983 [2024-07-23 18:22:30.384144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.983 [2024-07-23 18:22:30.384174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.983 [2024-07-23 18:22:30.384191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.983 [2024-07-23 18:22:30.384440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.983 [2024-07-23 18:22:30.384655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.983 [2024-07-23 18:22:30.384674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.384701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.387851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.396798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.397269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.397327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.397346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.397589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.397793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.397811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.397824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.400584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.409839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.410304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.410359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.410375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.410617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.410805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.410824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.410835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.413596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.422927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.423313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.423346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.423377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.423615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.423819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.423838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.423850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.426724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.436068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.436477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.436505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.436521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.436736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.436945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.436964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.436976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.439820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.449094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.449452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.449480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.449495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.449729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.449919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.449938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.449949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.452825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.462334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.462682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.462710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.462726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.462943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.463162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.463182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.463194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.466069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.475328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.475742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.475769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.475785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.476014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.476217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.476236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.476249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.479189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.488534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.488902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.488931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.488947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.489180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.489416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.489438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.489455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.492276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.501559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.501911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.501939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.501954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.502193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.502409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.502430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.984 [2024-07-23 18:22:30.502442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.984 [2024-07-23 18:22:30.505742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.984 [2024-07-23 18:22:30.514834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.984 [2024-07-23 18:22:30.515147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.984 [2024-07-23 18:22:30.515175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.984 [2024-07-23 18:22:30.515190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.984 [2024-07-23 18:22:30.515461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.984 [2024-07-23 18:22:30.515670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.984 [2024-07-23 18:22:30.515690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.985 [2024-07-23 18:22:30.515702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.985 [2024-07-23 18:22:30.518537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.985 [2024-07-23 18:22:30.527807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.985 [2024-07-23 18:22:30.528163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.985 [2024-07-23 18:22:30.528191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.985 [2024-07-23 18:22:30.528207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.985 [2024-07-23 18:22:30.528457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.985 [2024-07-23 18:22:30.528681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.985 [2024-07-23 18:22:30.528701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.985 [2024-07-23 18:22:30.528713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.985 [2024-07-23 18:22:30.531568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.985 [2024-07-23 18:22:30.540917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.985 [2024-07-23 18:22:30.541290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.985 [2024-07-23 18:22:30.541338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:22.985 [2024-07-23 18:22:30.541354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:22.985 [2024-07-23 18:22:30.541589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:22.985 [2024-07-23 18:22:30.541791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.985 [2024-07-23 18:22:30.541810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.985 [2024-07-23 18:22:30.541822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.985 [2024-07-23 18:22:30.544621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.985 [2024-07-23 18:22:30.553919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.985 [2024-07-23 18:22:30.554254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.985 [2024-07-23 18:22:30.554312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.985 [2024-07-23 18:22:30.554343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.985 [2024-07-23 18:22:30.554572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.985 [2024-07-23 18:22:30.554775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.985 [2024-07-23 18:22:30.554794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.985 [2024-07-23 18:22:30.554806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.985 [2024-07-23 18:22:30.557623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.985 [2024-07-23 18:22:30.566922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.985 [2024-07-23 18:22:30.567345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.985 [2024-07-23 18:22:30.567374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.985 [2024-07-23 18:22:30.567390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.985 [2024-07-23 18:22:30.567628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.985 [2024-07-23 18:22:30.567832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.985 [2024-07-23 18:22:30.567851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.985 [2024-07-23 18:22:30.567863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.985 [2024-07-23 18:22:30.570739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.985 [2024-07-23 18:22:30.580019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.985 [2024-07-23 18:22:30.580338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.985 [2024-07-23 18:22:30.580367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.985 [2024-07-23 18:22:30.580382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.985 [2024-07-23 18:22:30.580598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.985 [2024-07-23 18:22:30.580807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.985 [2024-07-23 18:22:30.580826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.985 [2024-07-23 18:22:30.580838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.985 [2024-07-23 18:22:30.583686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.985 [2024-07-23 18:22:30.592978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.985 [2024-07-23 18:22:30.593394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.985 [2024-07-23 18:22:30.593422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.985 [2024-07-23 18:22:30.593438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.985 [2024-07-23 18:22:30.593672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.985 [2024-07-23 18:22:30.593875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.985 [2024-07-23 18:22:30.593894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.985 [2024-07-23 18:22:30.593907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.985 [2024-07-23 18:22:30.596783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.985 [2024-07-23 18:22:30.606063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.985 [2024-07-23 18:22:30.606477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.985 [2024-07-23 18:22:30.606507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.985 [2024-07-23 18:22:30.606523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.985 [2024-07-23 18:22:30.606765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.985 [2024-07-23 18:22:30.606967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.985 [2024-07-23 18:22:30.606986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.985 [2024-07-23 18:22:30.606998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.985 [2024-07-23 18:22:30.609899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.985 [2024-07-23 18:22:30.619159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.985 [2024-07-23 18:22:30.619545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.985 [2024-07-23 18:22:30.619573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.985 [2024-07-23 18:22:30.619589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.985 [2024-07-23 18:22:30.619805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.985 [2024-07-23 18:22:30.620009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.985 [2024-07-23 18:22:30.620028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.985 [2024-07-23 18:22:30.620040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.985 [2024-07-23 18:22:30.622807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.985 [2024-07-23 18:22:30.632221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.985 [2024-07-23 18:22:30.632598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.985 [2024-07-23 18:22:30.632642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:22.985 [2024-07-23 18:22:30.632659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:22.985 [2024-07-23 18:22:30.632893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:22.985 [2024-07-23 18:22:30.633095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.985 [2024-07-23 18:22:30.633114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.985 [2024-07-23 18:22:30.633126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.985 [2024-07-23 18:22:30.636036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.244 [2024-07-23 18:22:30.645474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.244 [2024-07-23 18:22:30.645855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.244 [2024-07-23 18:22:30.645885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.244 [2024-07-23 18:22:30.645901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.244 [2024-07-23 18:22:30.646137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.244 [2024-07-23 18:22:30.646390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.244 [2024-07-23 18:22:30.646419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.244 [2024-07-23 18:22:30.646458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.244 [2024-07-23 18:22:30.649599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.244 [2024-07-23 18:22:30.658552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.244 [2024-07-23 18:22:30.658974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.244 [2024-07-23 18:22:30.659004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.244 [2024-07-23 18:22:30.659020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.244 [2024-07-23 18:22:30.659256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.244 [2024-07-23 18:22:30.659491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.244 [2024-07-23 18:22:30.659512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.244 [2024-07-23 18:22:30.659524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.244 [2024-07-23 18:22:30.662391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.244 [2024-07-23 18:22:30.671759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.244 [2024-07-23 18:22:30.672185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.244 [2024-07-23 18:22:30.672218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.244 [2024-07-23 18:22:30.672235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.672519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.672728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.672747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.672760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.675555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.684905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.685267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.685325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.685349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.685617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.685836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.685855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.685867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.688740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.697943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.698284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.698344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.698360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.698589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.698793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.698812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.698825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.701585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.711037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.711451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.711479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.711495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.711730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.711936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.711956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.711968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.714843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.724145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.724513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.724567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.724583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.724809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.724997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.725016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.725027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.727866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.737360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.737833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.737885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.737900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.738123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.738311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.738342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.738356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.741126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.750494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.750903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.750956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.750972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.751217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.751436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.751457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.751469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.754216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.763777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.764097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.764125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.764140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.764390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.764585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.764618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.764631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.767423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.776858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.777221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.777249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.777265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.777546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.777752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.777771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.777783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.780657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.789971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.790349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.790378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.790394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.245 [2024-07-23 18:22:30.790617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.245 [2024-07-23 18:22:30.790821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.245 [2024-07-23 18:22:30.790840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.245 [2024-07-23 18:22:30.790852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.245 [2024-07-23 18:22:30.793691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.245 [2024-07-23 18:22:30.802944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.245 [2024-07-23 18:22:30.803291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.245 [2024-07-23 18:22:30.803325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.245 [2024-07-23 18:22:30.803348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.246 [2024-07-23 18:22:30.803579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.246 [2024-07-23 18:22:30.803784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.246 [2024-07-23 18:22:30.803803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.246 [2024-07-23 18:22:30.803815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.246 [2024-07-23 18:22:30.806687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.246 [2024-07-23 18:22:30.816145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.246 [2024-07-23 18:22:30.816492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.246 [2024-07-23 18:22:30.816521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.246 [2024-07-23 18:22:30.816536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.246 [2024-07-23 18:22:30.816768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.246 [2024-07-23 18:22:30.816970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.246 [2024-07-23 18:22:30.816990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.246 [2024-07-23 18:22:30.817002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.246 [2024-07-23 18:22:30.819808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.246 [2024-07-23 18:22:30.829234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.246 [2024-07-23 18:22:30.829615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.246 [2024-07-23 18:22:30.829659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.246 [2024-07-23 18:22:30.829675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.246 [2024-07-23 18:22:30.829909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.246 [2024-07-23 18:22:30.830112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.246 [2024-07-23 18:22:30.830131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.246 [2024-07-23 18:22:30.830142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.246 [2024-07-23 18:22:30.833023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.246 [2024-07-23 18:22:30.842273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.246 [2024-07-23 18:22:30.842656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.246 [2024-07-23 18:22:30.842698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.246 [2024-07-23 18:22:30.842714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.246 [2024-07-23 18:22:30.842940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.246 [2024-07-23 18:22:30.843129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.246 [2024-07-23 18:22:30.843152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.246 [2024-07-23 18:22:30.843164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.246 [2024-07-23 18:22:30.846093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.246 [2024-07-23 18:22:30.855402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.246 [2024-07-23 18:22:30.855741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.246 [2024-07-23 18:22:30.855769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.246 [2024-07-23 18:22:30.855785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.246 [2024-07-23 18:22:30.856006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.246 [2024-07-23 18:22:30.856219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.246 [2024-07-23 18:22:30.856238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.246 [2024-07-23 18:22:30.856250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.246 [2024-07-23 18:22:30.859133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.246 [2024-07-23 18:22:30.868553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.246 [2024-07-23 18:22:30.868906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.246 [2024-07-23 18:22:30.868933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.246 [2024-07-23 18:22:30.868949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.246 [2024-07-23 18:22:30.869183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.246 [2024-07-23 18:22:30.869403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.246 [2024-07-23 18:22:30.869424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.246 [2024-07-23 18:22:30.869436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.246 [2024-07-23 18:22:30.872186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.246 [2024-07-23 18:22:30.881665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.246 [2024-07-23 18:22:30.882016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.246 [2024-07-23 18:22:30.882043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.246 [2024-07-23 18:22:30.882059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.246 [2024-07-23 18:22:30.882273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.246 [2024-07-23 18:22:30.882513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.246 [2024-07-23 18:22:30.882534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.246 [2024-07-23 18:22:30.882548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.246 [2024-07-23 18:22:30.885468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.246 [2024-07-23 18:22:30.894708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.246 [2024-07-23 18:22:30.895089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.246 [2024-07-23 18:22:30.895117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.246 [2024-07-23 18:22:30.895132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.246 [2024-07-23 18:22:30.895363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.246 [2024-07-23 18:22:30.895568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.246 [2024-07-23 18:22:30.895587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.246 [2024-07-23 18:22:30.895599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.246 [2024-07-23 18:22:30.898493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.505 [2024-07-23 18:22:30.908110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.505 [2024-07-23 18:22:30.908491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.505 [2024-07-23 18:22:30.908524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.505 [2024-07-23 18:22:30.908541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.505 [2024-07-23 18:22:30.908789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.505 [2024-07-23 18:22:30.909035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.505 [2024-07-23 18:22:30.909055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.505 [2024-07-23 18:22:30.909067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.505 [2024-07-23 18:22:30.912018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.505 [2024-07-23 18:22:30.921496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.505 [2024-07-23 18:22:30.922003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.505 [2024-07-23 18:22:30.922057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.505 [2024-07-23 18:22:30.922073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.505 [2024-07-23 18:22:30.922313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.505 [2024-07-23 18:22:30.922547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.505 [2024-07-23 18:22:30.922569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:30.922583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:30.925563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:30.934822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:30.935193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:30.935242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:30.935259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:30.935540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:30.935765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:30.935784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:30.935796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:30.938735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:30.947925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:30.948367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:30.948396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:30.948412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:30.948667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:30.948855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:30.948873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:30.948885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:30.951680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:30.961002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:30.961380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:30.961408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:30.961423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:30.961638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:30.961842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:30.961861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:30.961873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:30.964670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:30.974030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:30.974383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:30.974411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:30.974426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:30.974641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:30.974844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:30.974862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:30.974879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:30.977754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:30.987190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:30.987637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:30.987679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:30.987695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:30.987928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:30.988131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:30.988151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:30.988162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:30.991032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:31.000262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:31.000640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:31.000683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:31.000699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:31.000935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:31.001138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:31.001156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:31.001168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:31.004115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:31.013314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:31.013632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:31.013659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:31.013674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:31.013889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:31.014128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:31.014149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:31.014161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:31.017571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:31.026499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:31.026932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:31.026966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:31.026983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:31.027217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:31.027446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:31.027466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:31.027479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:31.030345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:31.039688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:31.040100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:31.040128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:31.040144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:31.040389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:31.040599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:31.040619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:31.040645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.506 [2024-07-23 18:22:31.043534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.506 [2024-07-23 18:22:31.052886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.506 [2024-07-23 18:22:31.053238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.506 [2024-07-23 18:22:31.053265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.506 [2024-07-23 18:22:31.053280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.506 [2024-07-23 18:22:31.053536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.506 [2024-07-23 18:22:31.053746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.506 [2024-07-23 18:22:31.053766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.506 [2024-07-23 18:22:31.053778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.507 [2024-07-23 18:22:31.056659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.507 [2024-07-23 18:22:31.066082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.507 [2024-07-23 18:22:31.066505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.507 [2024-07-23 18:22:31.066534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.507 [2024-07-23 18:22:31.066551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.507 [2024-07-23 18:22:31.066792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.507 [2024-07-23 18:22:31.067005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.507 [2024-07-23 18:22:31.067024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.507 [2024-07-23 18:22:31.067038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.507 [2024-07-23 18:22:31.069884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.507 [2024-07-23 18:22:31.079336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.507 [2024-07-23 18:22:31.079661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.507 [2024-07-23 18:22:31.079690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.507 [2024-07-23 18:22:31.079706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.507 [2024-07-23 18:22:31.079927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.507 [2024-07-23 18:22:31.080136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.507 [2024-07-23 18:22:31.080155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.507 [2024-07-23 18:22:31.080167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.507 [2024-07-23 18:22:31.083004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.507 [2024-07-23 18:22:31.092505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.507 [2024-07-23 18:22:31.092862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-23 18:22:31.092888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.507 [2024-07-23 18:22:31.092903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.507 [2024-07-23 18:22:31.093098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.507 [2024-07-23 18:22:31.093343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.507 [2024-07-23 18:22:31.093364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.507 [2024-07-23 18:22:31.093376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.507 [2024-07-23 18:22:31.096230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.507 [2024-07-23 18:22:31.105723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.507 [2024-07-23 18:22:31.106051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-23 18:22:31.106079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.507 [2024-07-23 18:22:31.106095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.507 [2024-07-23 18:22:31.106341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.507 [2024-07-23 18:22:31.106540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.507 [2024-07-23 18:22:31.106560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.507 [2024-07-23 18:22:31.106572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.507 [2024-07-23 18:22:31.109448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.507 [2024-07-23 18:22:31.118894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.507 [2024-07-23 18:22:31.119253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-23 18:22:31.119281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.507 [2024-07-23 18:22:31.119297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.507 [2024-07-23 18:22:31.119531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.507 [2024-07-23 18:22:31.119747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.507 [2024-07-23 18:22:31.119767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.507 [2024-07-23 18:22:31.119780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.507 [2024-07-23 18:22:31.122831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.507 [2024-07-23 18:22:31.132178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.507 [2024-07-23 18:22:31.132537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-23 18:22:31.132571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.507 [2024-07-23 18:22:31.132606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.507 [2024-07-23 18:22:31.132852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.507 [2024-07-23 18:22:31.133039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.507 [2024-07-23 18:22:31.133058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.507 [2024-07-23 18:22:31.133070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.507 [2024-07-23 18:22:31.136097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.507 [2024-07-23 18:22:31.145393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.507 [2024-07-23 18:22:31.145824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-23 18:22:31.145882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:23.507 [2024-07-23 18:22:31.145908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:23.507 [2024-07-23 18:22:31.146136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:23.507 [2024-07-23 18:22:31.146349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.507 [2024-07-23 18:22:31.146370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.507 [2024-07-23 18:22:31.146383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.507 [2024-07-23 18:22:31.149267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.507 [2024-07-23 18:22:31.158678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.507 [2024-07-23 18:22:31.159040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.507 [2024-07-23 18:22:31.159066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.507 [2024-07-23 18:22:31.159093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.507 [2024-07-23 18:22:31.159330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.507 [2024-07-23 18:22:31.159544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.507 [2024-07-23 18:22:31.159564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.507 [2024-07-23 18:22:31.159577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.507 [2024-07-23 18:22:31.162819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.766 [2024-07-23 18:22:31.172045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.766 [2024-07-23 18:22:31.172488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.766 [2024-07-23 18:22:31.172519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.766 [2024-07-23 18:22:31.172536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.766 [2024-07-23 18:22:31.172779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.766 [2024-07-23 18:22:31.172989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.766 [2024-07-23 18:22:31.173008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.766 [2024-07-23 18:22:31.173021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.766 [2024-07-23 18:22:31.176135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.766 [2024-07-23 18:22:31.185358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.766 [2024-07-23 18:22:31.185793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.766 [2024-07-23 18:22:31.185828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.766 [2024-07-23 18:22:31.185844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.766 [2024-07-23 18:22:31.186090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.766 [2024-07-23 18:22:31.186296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.766 [2024-07-23 18:22:31.186340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.766 [2024-07-23 18:22:31.186354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.766 [2024-07-23 18:22:31.189547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.766 [2024-07-23 18:22:31.199017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.766 [2024-07-23 18:22:31.199443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.766 [2024-07-23 18:22:31.199473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.766 [2024-07-23 18:22:31.199489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.766 [2024-07-23 18:22:31.199735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.766 [2024-07-23 18:22:31.199964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.766 [2024-07-23 18:22:31.199989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.766 [2024-07-23 18:22:31.200002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.766 [2024-07-23 18:22:31.203408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.766 [2024-07-23 18:22:31.212640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.766 [2024-07-23 18:22:31.213035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.766 [2024-07-23 18:22:31.213064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.766 [2024-07-23 18:22:31.213081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.766 [2024-07-23 18:22:31.213313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.766 [2024-07-23 18:22:31.213556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.766 [2024-07-23 18:22:31.213579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.766 [2024-07-23 18:22:31.213593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.766 [2024-07-23 18:22:31.216853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.766 [2024-07-23 18:22:31.226375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.766 [2024-07-23 18:22:31.226716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.766 [2024-07-23 18:22:31.226744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.766 [2024-07-23 18:22:31.226759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.766 [2024-07-23 18:22:31.226967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.766 [2024-07-23 18:22:31.227218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.766 [2024-07-23 18:22:31.227238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.766 [2024-07-23 18:22:31.227251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.766 [2024-07-23 18:22:31.230477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.239954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.240292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.240344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.240362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.240577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.240794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.240814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.240827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.244031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.253587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.254007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.254044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.254060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.254282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.254522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.254545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.254559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.257871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.267367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.267806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.267843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.267859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.268103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.268335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.268358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.268372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.271770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.280737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.281101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.281129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.281145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.281395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.281614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.281650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.281664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.284836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.294153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.294507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.294536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.294558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.294809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.294997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.295015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.295027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.298016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.307670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.308035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.308091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.308107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.308386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.308610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.308630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.308644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.311775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.320839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.321158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.321185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.321201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.321444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.321668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.321687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.321699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.324585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.333912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.334261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.334288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.334304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.334566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.334786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.334809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.334821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.337731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.347049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.347405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.347435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.347453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.347694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.347900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.347920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.347932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.350814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.360159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.767 [2024-07-23 18:22:31.360554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.767 [2024-07-23 18:22:31.360583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.767 [2024-07-23 18:22:31.360600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.767 [2024-07-23 18:22:31.360818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.767 [2024-07-23 18:22:31.361021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.767 [2024-07-23 18:22:31.361041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.767 [2024-07-23 18:22:31.361053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.767 [2024-07-23 18:22:31.364040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.767 [2024-07-23 18:22:31.373229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.768 [2024-07-23 18:22:31.373650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.768 [2024-07-23 18:22:31.373680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.768 [2024-07-23 18:22:31.373696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.768 [2024-07-23 18:22:31.373935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.768 [2024-07-23 18:22:31.374140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.768 [2024-07-23 18:22:31.374168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.768 [2024-07-23 18:22:31.374180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.768 [2024-07-23 18:22:31.377053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.768 [2024-07-23 18:22:31.386342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.768 [2024-07-23 18:22:31.386699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.768 [2024-07-23 18:22:31.386727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.768 [2024-07-23 18:22:31.386743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.768 [2024-07-23 18:22:31.386980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.768 [2024-07-23 18:22:31.387182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.768 [2024-07-23 18:22:31.387203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.768 [2024-07-23 18:22:31.387216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.768 [2024-07-23 18:22:31.390128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.768 [2024-07-23 18:22:31.399452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.768 [2024-07-23 18:22:31.399882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.768 [2024-07-23 18:22:31.399911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.768 [2024-07-23 18:22:31.399926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.768 [2024-07-23 18:22:31.400160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.768 [2024-07-23 18:22:31.400407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.768 [2024-07-23 18:22:31.400429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.768 [2024-07-23 18:22:31.400442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.768 [2024-07-23 18:22:31.403310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.768 [2024-07-23 18:22:31.412520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.768 [2024-07-23 18:22:31.412839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.768 [2024-07-23 18:22:31.412867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:23.768 [2024-07-23 18:22:31.412882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:23.768 [2024-07-23 18:22:31.413098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:23.768 [2024-07-23 18:22:31.413302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.768 [2024-07-23 18:22:31.413345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.768 [2024-07-23 18:22:31.413360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.768 [2024-07-23 18:22:31.416135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.026 [2024-07-23 18:22:31.426029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.026 [2024-07-23 18:22:31.426454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.026 [2024-07-23 18:22:31.426487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.026 [2024-07-23 18:22:31.426504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.026 [2024-07-23 18:22:31.426750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.026 [2024-07-23 18:22:31.426953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.026 [2024-07-23 18:22:31.426973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.026 [2024-07-23 18:22:31.426985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.026 [2024-07-23 18:22:31.430014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.026 [2024-07-23 18:22:31.438994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.026 [2024-07-23 18:22:31.439379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.026 [2024-07-23 18:22:31.439409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.027 [2024-07-23 18:22:31.439425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.027 [2024-07-23 18:22:31.439641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.027 [2024-07-23 18:22:31.439844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.027 [2024-07-23 18:22:31.439864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.027 [2024-07-23 18:22:31.439877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.027 [2024-07-23 18:22:31.442714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.027 [2024-07-23 18:22:31.452016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.027 [2024-07-23 18:22:31.452365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.027 [2024-07-23 18:22:31.452393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.027 [2024-07-23 18:22:31.452408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.027 [2024-07-23 18:22:31.452637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.027 [2024-07-23 18:22:31.452825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.027 [2024-07-23 18:22:31.452845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.027 [2024-07-23 18:22:31.452857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.027 [2024-07-23 18:22:31.455731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.027 [2024-07-23 18:22:31.465104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.027 [2024-07-23 18:22:31.465525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.027 [2024-07-23 18:22:31.465552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.027 [2024-07-23 18:22:31.465567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.027 [2024-07-23 18:22:31.465801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.027 [2024-07-23 18:22:31.466020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.027 [2024-07-23 18:22:31.466041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.027 [2024-07-23 18:22:31.466058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.027 [2024-07-23 18:22:31.469018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.027 [2024-07-23 18:22:31.478385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.027 [2024-07-23 18:22:31.478804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-23 18:22:31.478832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.027 [2024-07-23 18:22:31.478848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.027 [2024-07-23 18:22:31.479077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.027 [2024-07-23 18:22:31.479265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.027 [2024-07-23 18:22:31.479284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.027 [2024-07-23 18:22:31.479296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.027 [2024-07-23 18:22:31.482174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.027 [2024-07-23 18:22:31.491518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.027 [2024-07-23 18:22:31.491949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-23 18:22:31.491977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.027 [2024-07-23 18:22:31.491993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.027 [2024-07-23 18:22:31.492229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.027 [2024-07-23 18:22:31.492471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.027 [2024-07-23 18:22:31.492493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.027 [2024-07-23 18:22:31.492507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.027 [2024-07-23 18:22:31.495364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.027 [2024-07-23 18:22:31.504492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.027 [2024-07-23 18:22:31.504856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-23 18:22:31.504885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.027 [2024-07-23 18:22:31.504901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.027 [2024-07-23 18:22:31.505139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.027 [2024-07-23 18:22:31.505385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.027 [2024-07-23 18:22:31.505407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.027 [2024-07-23 18:22:31.505421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.027 [2024-07-23 18:22:31.508292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.027 [2024-07-23 18:22:31.517607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.027 [2024-07-23 18:22:31.517910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-23 18:22:31.517941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.027 [2024-07-23 18:22:31.517957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.027 [2024-07-23 18:22:31.518168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.027 [2024-07-23 18:22:31.518384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.027 [2024-07-23 18:22:31.518405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.027 [2024-07-23 18:22:31.518418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.027 [2024-07-23 18:22:31.521428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.027 [2024-07-23 18:22:31.531105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.027 [2024-07-23 18:22:31.531468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-23 18:22:31.531498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.027 [2024-07-23 18:22:31.531514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.027 [2024-07-23 18:22:31.531776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.027 [2024-07-23 18:22:31.531965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.027 [2024-07-23 18:22:31.531985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.027 [2024-07-23 18:22:31.531998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.027 [2024-07-23 18:22:31.534873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.027 [2024-07-23 18:22:31.544204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.027 [2024-07-23 18:22:31.544530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-23 18:22:31.544557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.027 [2024-07-23 18:22:31.544573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.027 [2024-07-23 18:22:31.544789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.027 [2024-07-23 18:22:31.544991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.027 [2024-07-23 18:22:31.545011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.027 [2024-07-23 18:22:31.545024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.027 [2024-07-23 18:22:31.547902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.027 [2024-07-23 18:22:31.557430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.027 [2024-07-23 18:22:31.557863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-23 18:22:31.557892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.027 [2024-07-23 18:22:31.557908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.027 [2024-07-23 18:22:31.558145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.027 [2024-07-23 18:22:31.558382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.027 [2024-07-23 18:22:31.558404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.027 [2024-07-23 18:22:31.558418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.561274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.570559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.570908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.570937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.570954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.571191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.571443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.571466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.571480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.574364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.583676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.584029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.584059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.584075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.584311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.584517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.584537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.584550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.587435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.596761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.597113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.597141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.597157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.597404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.597597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.597630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.597643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.600500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.609995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.610419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.610449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.610465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.610705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.610908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.610928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.610940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.613811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.623102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.623522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.623550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.623566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.623804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.624007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.624027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.624040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.626914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.636224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.636586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.636613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.636629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.636865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.637067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.637087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.637099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.640014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.649334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.649685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.649713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.649733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.649974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.650176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.650196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.650209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.653088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.662382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.662762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.662789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.662805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.663020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.663223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.663243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.663256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.666145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.028 [2024-07-23 18:22:31.675487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.028 [2024-07-23 18:22:31.675878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-23 18:22:31.675905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.028 [2024-07-23 18:22:31.675920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.028 [2024-07-23 18:22:31.676136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.028 [2024-07-23 18:22:31.676367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.028 [2024-07-23 18:22:31.676388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.028 [2024-07-23 18:22:31.676401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.028 [2024-07-23 18:22:31.679314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.287 [2024-07-23 18:22:31.688679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.287 [2024-07-23 18:22:31.689066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.287 [2024-07-23 18:22:31.689096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.287 [2024-07-23 18:22:31.689112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.287 [2024-07-23 18:22:31.689341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.287 [2024-07-23 18:22:31.689587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.287 [2024-07-23 18:22:31.689633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.287 [2024-07-23 18:22:31.689647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.287 [2024-07-23 18:22:31.692786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.287 [2024-07-23 18:22:31.701741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.287 [2024-07-23 18:22:31.702098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.287 [2024-07-23 18:22:31.702127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.287 [2024-07-23 18:22:31.702143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.287 [2024-07-23 18:22:31.702390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.287 [2024-07-23 18:22:31.702589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.287 [2024-07-23 18:22:31.702610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.287 [2024-07-23 18:22:31.702640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.287 [2024-07-23 18:22:31.705512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.287 [2024-07-23 18:22:31.714970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.287 [2024-07-23 18:22:31.715339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.287 [2024-07-23 18:22:31.715385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.287 [2024-07-23 18:22:31.715403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.287 [2024-07-23 18:22:31.715662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.287 [2024-07-23 18:22:31.715851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.287 [2024-07-23 18:22:31.715871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.287 [2024-07-23 18:22:31.715884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.287 [2024-07-23 18:22:31.718720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.287 [2024-07-23 18:22:31.728009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.287 [2024-07-23 18:22:31.728366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.287 [2024-07-23 18:22:31.728395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.287 [2024-07-23 18:22:31.728411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.287 [2024-07-23 18:22:31.728646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.287 [2024-07-23 18:22:31.728849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.287 [2024-07-23 18:22:31.728870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.287 [2024-07-23 18:22:31.728882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.287 [2024-07-23 18:22:31.731761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.288 [2024-07-23 18:22:31.741090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.288 [2024-07-23 18:22:31.741409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.288 [2024-07-23 18:22:31.741438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.288 [2024-07-23 18:22:31.741455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.288 [2024-07-23 18:22:31.741677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.288 [2024-07-23 18:22:31.741905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.288 [2024-07-23 18:22:31.741925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.288 [2024-07-23 18:22:31.741939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.288 [2024-07-23 18:22:31.744828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.288 [2024-07-23 18:22:31.754114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.288 [2024-07-23 18:22:31.754474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.288 [2024-07-23 18:22:31.754503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.288 [2024-07-23 18:22:31.754519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.288 [2024-07-23 18:22:31.754758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.288 [2024-07-23 18:22:31.754962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.288 [2024-07-23 18:22:31.754982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.288 [2024-07-23 18:22:31.754995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.288 [2024-07-23 18:22:31.757908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.288 [2024-07-23 18:22:31.767211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.288 [2024-07-23 18:22:31.767573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.288 [2024-07-23 18:22:31.767603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.288 [2024-07-23 18:22:31.767619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.288 [2024-07-23 18:22:31.767856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.288 [2024-07-23 18:22:31.768059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.288 [2024-07-23 18:22:31.768079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.288 [2024-07-23 18:22:31.768091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.288 [2024-07-23 18:22:31.771070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.288 [2024-07-23 18:22:31.780701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.288 [2024-07-23 18:22:31.781139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.288 [2024-07-23 18:22:31.781169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.288 [2024-07-23 18:22:31.781186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.288 [2024-07-23 18:22:31.781447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.288 [2024-07-23 18:22:31.781701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.288 [2024-07-23 18:22:31.781722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.288 [2024-07-23 18:22:31.781735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.288 [2024-07-23 18:22:31.784622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.288 [2024-07-23 18:22:31.793781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.288 [2024-07-23 18:22:31.794147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.288 [2024-07-23 18:22:31.794174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.288 [2024-07-23 18:22:31.794189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.288 [2024-07-23 18:22:31.794430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.288 [2024-07-23 18:22:31.794655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.288 [2024-07-23 18:22:31.794675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.288 [2024-07-23 18:22:31.794687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.288 [2024-07-23 18:22:31.797542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.288 [2024-07-23 18:22:31.806963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.288 [2024-07-23 18:22:31.807312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.288 [2024-07-23 18:22:31.807347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.288 [2024-07-23 18:22:31.807364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.288 [2024-07-23 18:22:31.807597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.288 [2024-07-23 18:22:31.807801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.288 [2024-07-23 18:22:31.807821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.288 [2024-07-23 18:22:31.807833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.288 [2024-07-23 18:22:31.810711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.288 [2024-07-23 18:22:31.820146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.288 [2024-07-23 18:22:31.820514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.288 [2024-07-23 18:22:31.820543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.288 [2024-07-23 18:22:31.820559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.288 [2024-07-23 18:22:31.820789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.288 [2024-07-23 18:22:31.820979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.288 [2024-07-23 18:22:31.820999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.288 [2024-07-23 18:22:31.821015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.288 [2024-07-23 18:22:31.823890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.288 [2024-07-23 18:22:31.833327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.288 [2024-07-23 18:22:31.833673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.288 [2024-07-23 18:22:31.833700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.288 [2024-07-23 18:22:31.833716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.288 [2024-07-23 18:22:31.833930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.288 [2024-07-23 18:22:31.834134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.288 [2024-07-23 18:22:31.834154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.288 [2024-07-23 18:22:31.834166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.288 [2024-07-23 18:22:31.837076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.288 [2024-07-23 18:22:31.846368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.288 [2024-07-23 18:22:31.846783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.288 [2024-07-23 18:22:31.846812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.288 [2024-07-23 18:22:31.846828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.288 [2024-07-23 18:22:31.847062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.288 [2024-07-23 18:22:31.847265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.288 [2024-07-23 18:22:31.847285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.288 [2024-07-23 18:22:31.847298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.288 [2024-07-23 18:22:31.850171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.288 [2024-07-23 18:22:31.859449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.288 [2024-07-23 18:22:31.859868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.288 [2024-07-23 18:22:31.859897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.288 [2024-07-23 18:22:31.859913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.288 [2024-07-23 18:22:31.860153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.288 [2024-07-23 18:22:31.860384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.288 [2024-07-23 18:22:31.860405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.288 [2024-07-23 18:22:31.860418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.288 [2024-07-23 18:22:31.863190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.288 [2024-07-23 18:22:31.872674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.289 [2024-07-23 18:22:31.873091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.289 [2024-07-23 18:22:31.873144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.289 [2024-07-23 18:22:31.873160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.289 [2024-07-23 18:22:31.873418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.289 [2024-07-23 18:22:31.873632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.289 [2024-07-23 18:22:31.873652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.289 [2024-07-23 18:22:31.873663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.289 [2024-07-23 18:22:31.876516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.289 [2024-07-23 18:22:31.885766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.289 [2024-07-23 18:22:31.886261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.289 [2024-07-23 18:22:31.886324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.289 [2024-07-23 18:22:31.886341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.289 [2024-07-23 18:22:31.886585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.289 [2024-07-23 18:22:31.886788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.289 [2024-07-23 18:22:31.886808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.289 [2024-07-23 18:22:31.886820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.289 [2024-07-23 18:22:31.889578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.289 [2024-07-23 18:22:31.898866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.289 [2024-07-23 18:22:31.899182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.289 [2024-07-23 18:22:31.899210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.289 [2024-07-23 18:22:31.899225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.289 [2024-07-23 18:22:31.899624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.289 [2024-07-23 18:22:31.899851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.289 [2024-07-23 18:22:31.899871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.289 [2024-07-23 18:22:31.899884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.289 [2024-07-23 18:22:31.902755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.289 [2024-07-23 18:22:31.912042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.289 [2024-07-23 18:22:31.912458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.289 [2024-07-23 18:22:31.912486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.289 [2024-07-23 18:22:31.912502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.289 [2024-07-23 18:22:31.912724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.289 [2024-07-23 18:22:31.912927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.289 [2024-07-23 18:22:31.912946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.289 [2024-07-23 18:22:31.912959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.289 [2024-07-23 18:22:31.915881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.289 [2024-07-23 18:22:31.925262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.289 [2024-07-23 18:22:31.925668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.289 [2024-07-23 18:22:31.925716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.289 [2024-07-23 18:22:31.925732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.289 [2024-07-23 18:22:31.925956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.289 [2024-07-23 18:22:31.926144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.289 [2024-07-23 18:22:31.926163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.289 [2024-07-23 18:22:31.926176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.289 [2024-07-23 18:22:31.929138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.289 [2024-07-23 18:22:31.938687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.289 [2024-07-23 18:22:31.939158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.289 [2024-07-23 18:22:31.939212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.289 [2024-07-23 18:22:31.939228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.289 [2024-07-23 18:22:31.939508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.289 [2024-07-23 18:22:31.939752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.289 [2024-07-23 18:22:31.939772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.289 [2024-07-23 18:22:31.939784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.289 [2024-07-23 18:22:31.942973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.548 [2024-07-23 18:22:31.952264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.548 [2024-07-23 18:22:31.952704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-23 18:22:31.952749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.548 [2024-07-23 18:22:31.952781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.548 [2024-07-23 18:22:31.953024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.548 [2024-07-23 18:22:31.953212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.548 [2024-07-23 18:22:31.953232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.548 [2024-07-23 18:22:31.953249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.548 [2024-07-23 18:22:31.956227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.548 [2024-07-23 18:22:31.965452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.548 [2024-07-23 18:22:31.965827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-23 18:22:31.965856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.548 [2024-07-23 18:22:31.965873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.548 [2024-07-23 18:22:31.966107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.548 [2024-07-23 18:22:31.966339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.548 [2024-07-23 18:22:31.966374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.548 [2024-07-23 18:22:31.966389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.548 [2024-07-23 18:22:31.969223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.548 [2024-07-23 18:22:31.978690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.548 [2024-07-23 18:22:31.979109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-23 18:22:31.979138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.548 [2024-07-23 18:22:31.979154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.548 [2024-07-23 18:22:31.979405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.548 [2024-07-23 18:22:31.979619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.548 [2024-07-23 18:22:31.979640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.548 [2024-07-23 18:22:31.979652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.548 [2024-07-23 18:22:31.982521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.548 [2024-07-23 18:22:31.991748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.548 [2024-07-23 18:22:31.992104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-23 18:22:31.992133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.548 [2024-07-23 18:22:31.992149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.548 [2024-07-23 18:22:31.992401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.548 [2024-07-23 18:22:31.992613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.548 [2024-07-23 18:22:31.992648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.548 [2024-07-23 18:22:31.992661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.548 [2024-07-23 18:22:31.995553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.548 [2024-07-23 18:22:32.004843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.548 [2024-07-23 18:22:32.005192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-23 18:22:32.005224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.548 [2024-07-23 18:22:32.005241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.548 [2024-07-23 18:22:32.005501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.548 [2024-07-23 18:22:32.005707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.548 [2024-07-23 18:22:32.005727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.548 [2024-07-23 18:22:32.005739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.548 [2024-07-23 18:22:32.008557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.548 [2024-07-23 18:22:32.017863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.548 [2024-07-23 18:22:32.018212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-23 18:22:32.018240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.548 [2024-07-23 18:22:32.018256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.548 [2024-07-23 18:22:32.018517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.548 [2024-07-23 18:22:32.018738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.548 [2024-07-23 18:22:32.018758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.548 [2024-07-23 18:22:32.018770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.548 [2024-07-23 18:22:32.021777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.548 [2024-07-23 18:22:32.031170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.548 [2024-07-23 18:22:32.031578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-23 18:22:32.031609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.548 [2024-07-23 18:22:32.031626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.548 [2024-07-23 18:22:32.031856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.548 [2024-07-23 18:22:32.032079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.548 [2024-07-23 18:22:32.032100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.548 [2024-07-23 18:22:32.032112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.548 [2024-07-23 18:22:32.034990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.548 [2024-07-23 18:22:32.044373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.548 [2024-07-23 18:22:32.044742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-23 18:22:32.044770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.548 [2024-07-23 18:22:32.044786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.548 [2024-07-23 18:22:32.045001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.548 [2024-07-23 18:22:32.045209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.548 [2024-07-23 18:22:32.045229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.548 [2024-07-23 18:22:32.045241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.548 [2024-07-23 18:22:32.048122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.057371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.057732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.057758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.057773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.057968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.058190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.058211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.058223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.549 [2024-07-23 18:22:32.061110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.070429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.070840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.070867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.070883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.071114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.071328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.071363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.071375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.549 [2024-07-23 18:22:32.074147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.083658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.084010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.084038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.084054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.084287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.084511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.084531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.084545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.549 [2024-07-23 18:22:32.087479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.096736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.097086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.097114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.097130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.097377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.097570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.097599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.097611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.549 [2024-07-23 18:22:32.100480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.109942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.110274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.110343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.110359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.110594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.110797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.110816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.110828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.549 [2024-07-23 18:22:32.113701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.122991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.123402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.123440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.123457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.123692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.123897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.123916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.123929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.549 [2024-07-23 18:22:32.126805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.136043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.549 [2024-07-23 18:22:32.136413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.549 [2024-07-23 18:22:32.136442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.549 [2024-07-23 18:22:32.136478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.549 [2024-07-23 18:22:32.136707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.549 [2024-07-23 18:22:32.136911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.549 [2024-07-23 18:22:32.136931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.549 [2024-07-23 18:22:32.136944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.549 [2024-07-23 18:22:32.139819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:24.549 [2024-07-23 18:22:32.149057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.149411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.149439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.149455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.149691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.149894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.149915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.149927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.549 [2024-07-23 18:22:32.152799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.162221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.162578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.162606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.162622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.162860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.163065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.163085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.163097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.549 [2024-07-23 18:22:32.165975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.549 [2024-07-23 18:22:32.175474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.549 [2024-07-23 18:22:32.175839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.549 [2024-07-23 18:22:32.175866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.549 [2024-07-23 18:22:32.175881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.549 [2024-07-23 18:22:32.176097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.549 [2024-07-23 18:22:32.176314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.549 [2024-07-23 18:22:32.176351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.549 [2024-07-23 18:22:32.176365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.550 [2024-07-23 18:22:32.179178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.550 [2024-07-23 18:22:32.188716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.550 [2024-07-23 18:22:32.189135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.550 [2024-07-23 18:22:32.189164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.550 [2024-07-23 18:22:32.189180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.550 [2024-07-23 18:22:32.189426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.550 [2024-07-23 18:22:32.189636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.550 [2024-07-23 18:22:32.189656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.550 [2024-07-23 18:22:32.189668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.550 [2024-07-23 18:22:32.192527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.550 [2024-07-23 18:22:32.201867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.550 [2024-07-23 18:22:32.202219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.550 [2024-07-23 18:22:32.202246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.550 [2024-07-23 18:22:32.202262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.550 [2024-07-23 18:22:32.202541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.550 [2024-07-23 18:22:32.202780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.550 [2024-07-23 18:22:32.202802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.550 [2024-07-23 18:22:32.202817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.550 [2024-07-23 18:22:32.206067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.810 [2024-07-23 18:22:32.215173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.810 [2024-07-23 18:22:32.215577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.810 [2024-07-23 18:22:32.215606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.810 [2024-07-23 18:22:32.215637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.810 [2024-07-23 18:22:32.215851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.810 [2024-07-23 18:22:32.216054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.810 [2024-07-23 18:22:32.216073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.810 [2024-07-23 18:22:32.216086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.810 [2024-07-23 18:22:32.218962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.810 [2024-07-23 18:22:32.228277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.810 [2024-07-23 18:22:32.228672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.810 [2024-07-23 18:22:32.228700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.810 [2024-07-23 18:22:32.228717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.810 [2024-07-23 18:22:32.228933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.810 [2024-07-23 18:22:32.229137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.810 [2024-07-23 18:22:32.229157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.810 [2024-07-23 18:22:32.229169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.810 [2024-07-23 18:22:32.232042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2500542 Killed "${NVMF_APP[@]}" "$@"
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2501491
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2501491
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2501491 ']'
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:24.810 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:24.810 [2024-07-23 18:22:32.241734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.810 [2024-07-23 18:22:32.242195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.810 [2024-07-23 18:22:32.242247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.810 [2024-07-23 18:22:32.242264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.810 [2024-07-23 18:22:32.242493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.810 [2024-07-23 18:22:32.242734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.810 [2024-07-23 18:22:32.242753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.810 [2024-07-23 18:22:32.242766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.810 [2024-07-23 18:22:32.245882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.810 [2024-07-23 18:22:32.255066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.810 [2024-07-23 18:22:32.255421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.810 [2024-07-23 18:22:32.255451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.810 [2024-07-23 18:22:32.255468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.810 [2024-07-23 18:22:32.255710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.810 [2024-07-23 18:22:32.255919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.810 [2024-07-23 18:22:32.255939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.810 [2024-07-23 18:22:32.255951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.810 [2024-07-23 18:22:32.259065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.810 [2024-07-23 18:22:32.268582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.810 [2024-07-23 18:22:32.268979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.810 [2024-07-23 18:22:32.269009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.810 [2024-07-23 18:22:32.269025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.810 [2024-07-23 18:22:32.269267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.810 [2024-07-23 18:22:32.269508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.810 [2024-07-23 18:22:32.269531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.810 [2024-07-23 18:22:32.269546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.810 [2024-07-23 18:22:32.272920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.810 [2024-07-23 18:22:32.281984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.810 [2024-07-23 18:22:32.282375] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization...
00:34:24.810 [2024-07-23 18:22:32.282411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.810 [2024-07-23 18:22:32.282440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.810 [2024-07-23 18:22:32.282445] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:24.811 [2024-07-23 18:22:32.282457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.282700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.282892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.282910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.282922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.285879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.295514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.295903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.295931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.295948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.296177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.296426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.296448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.296462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.299622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.308946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.309329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.309358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.309375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.309590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.309827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.309847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.309859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.312972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 EAL: No free 2048 kB hugepages reported on node 1
00:34:24.811 [2024-07-23 18:22:32.322559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.322989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.323042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.323076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.323349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.323568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.323589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.323603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.326887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.336004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.336392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.336421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.336437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.336674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.336909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.336929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.336942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.340215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.349511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.349898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.349926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.349942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.350176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.350403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.350424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.350437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.351560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:24.811 [2024-07-23 18:22:32.353459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.362965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.363573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.363629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.363649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.363901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.364113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.364132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.364148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.367351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.376515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.376948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.376977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.376994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.377228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.377474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.377504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.377519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.380752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.390118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.390511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.390541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.390558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.390801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.391001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.391020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.391033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.394208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.403767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.404293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.404347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.404368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.811 [2024-07-23 18:22:32.404593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.811 [2024-07-23 18:22:32.404836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.811 [2024-07-23 18:22:32.404857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.811 [2024-07-23 18:22:32.404874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.811 [2024-07-23 18:22:32.408129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.811 [2024-07-23 18:22:32.417293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.811 [2024-07-23 18:22:32.417844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.811 [2024-07-23 18:22:32.417879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.811 [2024-07-23 18:22:32.417898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.812 [2024-07-23 18:22:32.418134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.812 [2024-07-23 18:22:32.418399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.812 [2024-07-23 18:22:32.418422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.812 [2024-07-23 18:22:32.418439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.812 [2024-07-23 18:22:32.421663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.812 [2024-07-23 18:22:32.430782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:24.812 [2024-07-23 18:22:32.431172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.812 [2024-07-23 18:22:32.431202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420
00:34:24.812 [2024-07-23 18:22:32.431217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set
00:34:24.812 [2024-07-23 18:22:32.431441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor
00:34:24.812 [2024-07-23 18:22:32.431686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:24.812 [2024-07-23 18:22:32.431706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:24.812 [2024-07-23 18:22:32.431720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:24.812 [2024-07-23 18:22:32.434778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:24.812 [2024-07-23 18:22:32.444141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.812 [2024-07-23 18:22:32.444505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.812 [2024-07-23 18:22:32.444533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.812 [2024-07-23 18:22:32.444549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.812 [2024-07-23 18:22:32.444764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.812 [2024-07-23 18:22:32.444983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.812 [2024-07-23 18:22:32.445003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.812 [2024-07-23 18:22:32.445018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.812 [2024-07-23 18:22:32.445851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:24.812 [2024-07-23 18:22:32.445882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:24.812 [2024-07-23 18:22:32.445895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:24.812 [2024-07-23 18:22:32.445906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:24.812 [2024-07-23 18:22:32.445916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:24.812 [2024-07-23 18:22:32.446097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:24.812 [2024-07-23 18:22:32.446166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:24.812 [2024-07-23 18:22:32.446169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.812 [2024-07-23 18:22:32.448353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.812 [2024-07-23 18:22:32.457879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.812 [2024-07-23 18:22:32.458406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.812 [2024-07-23 18:22:32.458447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:24.812 [2024-07-23 18:22:32.458467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:24.812 [2024-07-23 18:22:32.458709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:24.812 [2024-07-23 18:22:32.458925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.812 [2024-07-23 18:22:32.458958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.812 [2024-07-23 18:22:32.458975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.812 [2024-07-23 18:22:32.462263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.083 [2024-07-23 18:22:32.472534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.083 [2024-07-23 18:22:32.473152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.083 [2024-07-23 18:22:32.473202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.083 [2024-07-23 18:22:32.473235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.083 [2024-07-23 18:22:32.473559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.083 [2024-07-23 18:22:32.473872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.083 [2024-07-23 18:22:32.473904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.083 [2024-07-23 18:22:32.473932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.083 [2024-07-23 18:22:32.478397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.083 [2024-07-23 18:22:32.487372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.083 [2024-07-23 18:22:32.487948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.083 [2024-07-23 18:22:32.487991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.083 [2024-07-23 18:22:32.488012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.083 [2024-07-23 18:22:32.488245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.083 [2024-07-23 18:22:32.488493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.083 [2024-07-23 18:22:32.488518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.083 [2024-07-23 18:22:32.488536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.083 [2024-07-23 18:22:32.491830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.083 [2024-07-23 18:22:32.501146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.083 [2024-07-23 18:22:32.501642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.083 [2024-07-23 18:22:32.501679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.083 [2024-07-23 18:22:32.501698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.501936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.502152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.502172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.502188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.505437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 [2024-07-23 18:22:32.514713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.515252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.515292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.515312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.515545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.515779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.515800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.515817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.518983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 [2024-07-23 18:22:32.528175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.528598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.528641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.528658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.528892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.529105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.529125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.529139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.532260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 [2024-07-23 18:22:32.541700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.542040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.542068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.542084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.542298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.542525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.542546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.542559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.545818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 [2024-07-23 18:22:32.555234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.555599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.555627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.555644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.555866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.556085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.556106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:25.084 [2024-07-23 18:22:32.556119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:25.084 [2024-07-23 18:22:32.559377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 [2024-07-23 18:22:32.568826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.569192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.569220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.569238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.569461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.569693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.569713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.569726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.572982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:25.084 [2024-07-23 18:22:32.579840] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:25.084 [2024-07-23 18:22:32.582477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.582868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.582896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.582912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.583126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.583379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.583400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.583413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.586540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 [2024-07-23 18:22:32.596001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.596420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.596449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.596465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.596695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.596915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.596935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.596947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.600101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:25.084 [2024-07-23 18:22:32.609754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.610141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.610171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.610188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.610418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.610662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.610684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.610698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.613977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 [2024-07-23 18:22:32.623459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.623976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.624013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.624033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.624272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.624521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.624543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.624559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 Malloc0 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:25.084 [2024-07-23 18:22:32.627849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:25.084 [2024-07-23 18:22:32.637172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.637523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.084 [2024-07-23 18:22:32.637552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d08ed0 with addr=10.0.0.2, port=4420 00:34:25.084 [2024-07-23 18:22:32.637568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ed0 is same with the state(5) to be set 00:34:25.084 [2024-07-23 18:22:32.637797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ed0 (9): Bad file descriptor 00:34:25.084 [2024-07-23 18:22:32.638009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:25.084 [2024-07-23 18:22:32.638029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:25.084 [2024-07-23 18:22:32.638042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:25.084 [2024-07-23 18:22:32.641194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:25.084 [2024-07-23 18:22:32.645612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.084 18:22:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2500826 00:34:25.084 [2024-07-23 18:22:32.650926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:25.084 [2024-07-23 18:22:32.724256] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:35.051 00:34:35.051 Latency(us) 00:34:35.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.051 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:35.051 Verification LBA range: start 0x0 length 0x4000 00:34:35.051 Nvme1n1 : 15.00 6728.29 26.28 10281.56 0.00 7502.53 618.95 15631.55 00:34:35.051 =================================================================================================================== 00:34:35.051 Total : 6728.29 26.28 10281.56 0.00 7502.53 618.95 15631.55 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:35.052 rmmod nvme_tcp 00:34:35.052 rmmod nvme_fabrics 00:34:35.052 rmmod nvme_keyring 00:34:35.052 18:22:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2501491 ']' 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2501491 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2501491 ']' 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2501491 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2501491 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2501491' 00:34:35.052 killing process with pid 2501491 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2501491 00:34:35.052 18:22:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2501491 00:34:35.052 18:22:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:35.052 18:22:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:35.052 18:22:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:35.052 18:22:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:35.052 18:22:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:35.052 18:22:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.052 18:22:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.052 18:22:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:36.953 00:34:36.953 real 0m22.349s 00:34:36.953 user 0m59.802s 00:34:36.953 sys 0m4.132s 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:36.953 ************************************ 00:34:36.953 END TEST nvmf_bdevperf 00:34:36.953 ************************************ 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.953 ************************************ 00:34:36.953 START TEST nvmf_target_disconnect 00:34:36.953 ************************************ 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:36.953 * Looking for test storage... 00:34:36.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:36.953 18:22:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:36.953 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:36.954 18:22:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:38.853 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.853 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:38.854 
18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.854 18:22:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:38.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.854 18:22:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:38.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:38.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:38.854 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.111 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.111 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:39.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:39.112 18:22:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:39.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:34:39.112 00:34:39.112 --- 10.0.0.2 ping statistics --- 00:34:39.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.112 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:39.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:34:39.112 00:34:39.112 --- 10.0.0.1 ping statistics --- 00:34:39.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.112 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.112 ************************************ 00:34:39.112 START TEST nvmf_target_disconnect_tc1 00:34:39.112 ************************************ 00:34:39.112 18:22:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:39.112 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.112 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.369 [2024-07-23 18:22:46.777444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.369 [2024-07-23 18:22:46.777507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac7e70 with addr=10.0.0.2, port=4420 00:34:39.369 [2024-07-23 18:22:46.777537] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:39.369 [2024-07-23 18:22:46.777560] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:39.369 [2024-07-23 18:22:46.777574] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:39.369 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:39.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:39.369 Initializing NVMe Controllers 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:39.369 18:22:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:39.369 00:34:39.369 real 0m0.091s 00:34:39.369 user 0m0.039s 00:34:39.369 sys 0m0.052s 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:39.369 ************************************ 00:34:39.369 END TEST nvmf_target_disconnect_tc1 00:34:39.369 ************************************ 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.369 ************************************ 00:34:39.369 START TEST nvmf_target_disconnect_tc2 00:34:39.369 ************************************ 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2504650 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2504650 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2504650 ']' 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:39.369 18:22:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.369 [2024-07-23 18:22:46.898143] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:34:39.369 [2024-07-23 18:22:46.898224] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.369 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.369 [2024-07-23 18:22:46.964661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:39.626 [2024-07-23 18:22:47.058354] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.626 [2024-07-23 18:22:47.058411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.626 [2024-07-23 18:22:47.058425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.626 [2024-07-23 18:22:47.058440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.626 [2024-07-23 18:22:47.058450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:39.626 [2024-07-23 18:22:47.061337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:34:39.626 [2024-07-23 18:22:47.061406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:34:39.626 [2024-07-23 18:22:47.061475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:34:39.626 [2024-07-23 18:22:47.061478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:34:39.626 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.627 Malloc0
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.627 [2024-07-23 18:22:47.236227] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.627 [2024-07-23 18:22:47.264523] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2504797
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:34:39.627 18:22:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:39.883 EAL: No free 2048 kB hugepages reported on node 1
00:34:41.787 18:22:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2504650
00:34:41.787 18:22:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Write completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Write completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 [2024-07-23 18:22:49.289551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Write completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Write completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Write completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Write completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Write completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Write completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.787 Read completed with error (sct=0, sc=8)
00:34:41.787 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 [2024-07-23 18:22:49.289884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 [2024-07-23 18:22:49.290216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Read completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 Write completed with error (sct=0, sc=8)
00:34:41.788 starting I/O failed
00:34:41.788 [2024-07-23 18:22:49.290589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.788 [2024-07-23 18:22:49.290776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.788 [2024-07-23 18:22:49.290815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.788 qpair failed and we were unable to recover it.
00:34:41.788 [2024-07-23 18:22:49.290972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.788 [2024-07-23 18:22:49.291001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.788 qpair failed and we were unable to recover it.
00:34:41.788 [2024-07-23 18:22:49.291105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.788 [2024-07-23 18:22:49.291131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.788 qpair failed and we were unable to recover it.
00:34:41.788 [2024-07-23 18:22:49.291239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.788 [2024-07-23 18:22:49.291265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.788 qpair failed and we were unable to recover it.
00:34:41.788 [2024-07-23 18:22:49.291417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.788 [2024-07-23 18:22:49.291445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.788 qpair failed and we were unable to recover it.
00:34:41.788 [2024-07-23 18:22:49.291549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.291577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 00:34:41.788 [2024-07-23 18:22:49.291678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.291704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 00:34:41.788 [2024-07-23 18:22:49.291833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.291859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 00:34:41.788 [2024-07-23 18:22:49.291992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.292026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 00:34:41.788 [2024-07-23 18:22:49.292117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.292143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 
00:34:41.788 [2024-07-23 18:22:49.292272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.292297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 00:34:41.788 [2024-07-23 18:22:49.292468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.292495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 00:34:41.788 [2024-07-23 18:22:49.292617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.292645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 00:34:41.788 [2024-07-23 18:22:49.292769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.788 [2024-07-23 18:22:49.292794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.788 qpair failed and we were unable to recover it. 00:34:41.788 [2024-07-23 18:22:49.292941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.292982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 
00:34:41.789 [2024-07-23 18:22:49.293143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.293171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.293291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.293324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.293428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.293455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.293543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.293570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.293719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.293744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 
00:34:41.789 [2024-07-23 18:22:49.293874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.293901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.294078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.294139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.294240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.294267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.294371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.294397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.294505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.294531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 
00:34:41.789 [2024-07-23 18:22:49.294629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.294654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.294777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.294802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.294948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.294976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.295096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.295122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.295222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.295250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 
00:34:41.789 [2024-07-23 18:22:49.295372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.295399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.295495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.295522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.295649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.295676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.295891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.295923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.296049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.296074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 
00:34:41.789 [2024-07-23 18:22:49.296171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.296197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.296327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.296353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.296473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.296500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.296598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.296635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.296749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.296775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 
00:34:41.789 [2024-07-23 18:22:49.296928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.296956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.297096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.297124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.297237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.297275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.297407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.297447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.297555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.297591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 
00:34:41.789 [2024-07-23 18:22:49.297719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.297743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.297862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.297889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.298011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.298035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.298166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.298193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 00:34:41.789 [2024-07-23 18:22:49.298294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.789 [2024-07-23 18:22:49.298327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.789 qpair failed and we were unable to recover it. 
00:34:41.789 [2024-07-23 18:22:49.298437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.789 [2024-07-23 18:22:49.298462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.789 qpair failed and we were unable to recover it.
00:34:41.789 [2024-07-23 18:22:49.298553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.789 [2024-07-23 18:22:49.298578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.298698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.298723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.298815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.298843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.298973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.298998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.299089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.299115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.299226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.299252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.299409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.299450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.299555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.299594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.299731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.299759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.299915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.299948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.300113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.300141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.300264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.300289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.300410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.300438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.300556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.300582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.300874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.300902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.301111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.301161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.301283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.301309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.301425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.301452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.301576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.301606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.301699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.301724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.301902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.301929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.302062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.302087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.302181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.302207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.302296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.302329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.302442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.302472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.302562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.302598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.302751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.302778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.302898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.302923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.303043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.303071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.303232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.303259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.303364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.303390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.303491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.303517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.303656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.303683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.303804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.303829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.303921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.303947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.304029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.304054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.304176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.304202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.790 [2024-07-23 18:22:49.304295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.790 [2024-07-23 18:22:49.304326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.790 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.304456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.304482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.304582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.304620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.304769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.304796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.304884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.304909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.305058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.305083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.305413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.305453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.305585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.305615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.305738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.305764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.305917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.305944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.306094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.306122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.306245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.306270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.306384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.306415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.306514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.306540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.306674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.306701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.306798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.306824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.306946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.306972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.307094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.307121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.307211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.307237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.307384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.307422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.307546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.307575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.307712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.307739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.307887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.307915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.308067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.308095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.308191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.308217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.308346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.308376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.308505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.308532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.308683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.308711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.308829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.308856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.308980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.309007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.309129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.309155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.309278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.309311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.309478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.309504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.309619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.309646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.309762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.309788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.309883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.309910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.310039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.310065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.310164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.791 [2024-07-23 18:22:49.310190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.791 qpair failed and we were unable to recover it.
00:34:41.791 [2024-07-23 18:22:49.310306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.310339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.310440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.310467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.310566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.310603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.310720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.310746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.310843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.310871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.310988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.311013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.311102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.311128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.311245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.311272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.311405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.311445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.311561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.311601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.311726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.311751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.311871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.311895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.312022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.312056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.312177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.312204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.312338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.312381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.312507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.312533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.312655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.312682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.312778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.312804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.313002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.792 [2024-07-23 18:22:49.313029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.792 qpair failed and we were unable to recover it.
00:34:41.792 [2024-07-23 18:22:49.313149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-23 18:22:49.313175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-23 18:22:49.313291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-23 18:22:49.313323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-23 18:22:49.313450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-23 18:22:49.313476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-23 18:22:49.313607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-23 18:22:49.313634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-23 18:22:49.313723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-23 18:22:49.313749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 
00:34:41.792 [2024-07-23 18:22:49.313893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-23 18:22:49.313919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-23 18:22:49.314013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-23 18:22:49.314039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-23 18:22:49.314153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.792 [2024-07-23 18:22:49.314179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.792 qpair failed and we were unable to recover it. 00:34:41.792 [2024-07-23 18:22:49.314282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.314333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.314463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.314504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-23 18:22:49.314630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.314657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.314806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.314832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.314928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.314953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.315080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.315104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.315198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.315225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-23 18:22:49.315326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.315354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.315445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.315471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.315563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.315589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.315735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.315761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.315948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.315975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-23 18:22:49.316092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.316117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.316209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.316236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.316378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.316412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.316534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.316561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.316695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.316722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-23 18:22:49.316874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.316902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.317022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.317048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.317198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.317224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.317341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.317368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.317516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.317543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-23 18:22:49.317671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.317697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.317822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.317850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.317974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.318000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.318157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.318185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.318313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.318346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-23 18:22:49.318470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.318498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.318666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.318707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.318809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.318837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.318930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.318955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.319081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.319108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 
00:34:41.793 [2024-07-23 18:22:49.319217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.319243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.793 [2024-07-23 18:22:49.319367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.793 [2024-07-23 18:22:49.319394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.793 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.319529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.319557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.319654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.319679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.319802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.319829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-23 18:22:49.319921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.319946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.320074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.320101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.320217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.320243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.320355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.320394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.320529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.320557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-23 18:22:49.320693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.320722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.320866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.320893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.320980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.321006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.321128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.321154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.321270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.321296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-23 18:22:49.321460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.321488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.321614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.321645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.321836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.321898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.322069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.322096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.322178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.322203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-23 18:22:49.322360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.322387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.322595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.322624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.322783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.322815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.322937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.322964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.323091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.323118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-23 18:22:49.323248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.323276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.323408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.323435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.323520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.323546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.323667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.323693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.323841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.323868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-23 18:22:49.324016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.324043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.324142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.324168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.324294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.324328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.324430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.324456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.324576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.324602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 
00:34:41.794 [2024-07-23 18:22:49.324699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.324725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.324849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.324875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.325024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.794 [2024-07-23 18:22:49.325051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.794 qpair failed and we were unable to recover it. 00:34:41.794 [2024-07-23 18:22:49.325167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.325193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.325342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.325369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-23 18:22:49.325532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.325572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.325702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.325732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.325879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.325907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.326058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.326086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.326185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.326212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-23 18:22:49.326344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.326372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.326500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.326527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.326649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.326674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.326772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.326797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.326936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.326963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-23 18:22:49.327112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.327137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.327261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.327288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.327395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.327423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.327549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.327575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 00:34:41.795 [2024-07-23 18:22:49.327695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.795 [2024-07-23 18:22:49.327721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.795 qpair failed and we were unable to recover it. 
00:34:41.795 [2024-07-23 18:22:49.327812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.327837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.327958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.327983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.328102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.328128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.328221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.328247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.328348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.328375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.328468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.328495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.328625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.328652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.328805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.328836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.328990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.329020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.329136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.329162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.329309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.329347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.329476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.329501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.329623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.329649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.329793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.329820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.329968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.329995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.330185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.330215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.795 qpair failed and we were unable to recover it.
00:34:41.795 [2024-07-23 18:22:49.330360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.795 [2024-07-23 18:22:49.330402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.330533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.330562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.330761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.330814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.330990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.331044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.331245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.331296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.331416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.331443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.331593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.331621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.331743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.331769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.331916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.331943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.332088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.332114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.332261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.332288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.332441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.332481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.332632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.332660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.332830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.332874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.333000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.333024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.333152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.333179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.333330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.333356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.333445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.333470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.333589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.333618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.333721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.333746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.333897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.333926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.334040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.334066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.334170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.334197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.334325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.334352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.334490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.334516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.334610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.334637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.334756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.334791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.334876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.334902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.335019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.335045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.335145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.335171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.335293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.335322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.335451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.335475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.335579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.335603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.335692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.335716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.335831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.335856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.335980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.336007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.336119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.336157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.796 [2024-07-23 18:22:49.336305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.796 [2024-07-23 18:22:49.336346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.796 qpair failed and we were unable to recover it.
00:34:41.797 [2024-07-23 18:22:49.336496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.797 [2024-07-23 18:22:49.336524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.797 qpair failed and we were unable to recover it.
00:34:41.797 [2024-07-23 18:22:49.336623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.797 [2024-07-23 18:22:49.336649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.797 qpair failed and we were unable to recover it.
00:34:41.797 [2024-07-23 18:22:49.336780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.797 [2024-07-23 18:22:49.336807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.797 qpair failed and we were unable to recover it.
00:34:41.797 [2024-07-23 18:22:49.336929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.797 [2024-07-23 18:22:49.336957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.797 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.337077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.337103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.337193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.337219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.337311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.337346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.337468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.337497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.337614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.337639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.337743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.337767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.337877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.337902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.337995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.338019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.338135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.338162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.338285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.338311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.338442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.338467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.338591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.338616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.338764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.338791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.338942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.338969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.339111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.339138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.339253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.339278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.339378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.339405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.339536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.339561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.339717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.339744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.339871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.339896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.339989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.340015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.340160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.340188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.340322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.340349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.340472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.340505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.340621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.340646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.340791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.340818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.340940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.340968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.341063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.341088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.341179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.341204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.341328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.798 [2024-07-23 18:22:49.341353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.798 qpair failed and we were unable to recover it.
00:34:41.798 [2024-07-23 18:22:49.341484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.799 [2024-07-23 18:22:49.341522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.799 qpair failed and we were unable to recover it.
00:34:41.799 [2024-07-23 18:22:49.341655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.799 [2024-07-23 18:22:49.341687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.799 qpair failed and we were unable to recover it.
00:34:41.799 [2024-07-23 18:22:49.341872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.799 [2024-07-23 18:22:49.341937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.799 qpair failed and we were unable to recover it.
00:34:41.799 [2024-07-23 18:22:49.342060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.799 [2024-07-23 18:22:49.342086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.799 qpair failed and we were unable to recover it.
00:34:41.799 [2024-07-23 18:22:49.342234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.799 [2024-07-23 18:22:49.342262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.799 qpair failed and we were unable to recover it.
00:34:41.799 [2024-07-23 18:22:49.342388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.799 [2024-07-23 18:22:49.342414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.799 qpair failed and we were unable to recover it.
00:34:41.799 [2024-07-23 18:22:49.342572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.799 [2024-07-23 18:22:49.342599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.799 qpair failed and we were unable to recover it.
00:34:41.799 [2024-07-23 18:22:49.342723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.342749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.342873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.342900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.343052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.343080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.343176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.343201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.343330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.343355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 
00:34:41.799 [2024-07-23 18:22:49.343451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.343476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.343599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.343625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.343869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.343921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.344120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.344182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.344310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.344342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 
00:34:41.799 [2024-07-23 18:22:49.344460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.344485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.344583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.344608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.344702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.344728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.344874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.344902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.345033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.345060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 
00:34:41.799 [2024-07-23 18:22:49.345209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.345237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.345336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.345362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.345466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.345492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.345584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.345609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.345700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.345726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 
00:34:41.799 [2024-07-23 18:22:49.345857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.345884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.346006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.346031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.346163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.346201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.346337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.346365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.346514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.346543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 
00:34:41.799 [2024-07-23 18:22:49.346741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.799 [2024-07-23 18:22:49.346800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.799 qpair failed and we were unable to recover it. 00:34:41.799 [2024-07-23 18:22:49.346909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.346951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.347125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.347153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.347297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.347331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.347457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.347482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-23 18:22:49.347596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.347621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.347745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.347771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.347937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.347988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.348150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.348206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.348343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.348370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-23 18:22:49.348493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.348518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.348644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.348671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.348786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.348812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.348934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.348959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.349106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.349133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-23 18:22:49.349234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.349260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.349386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.349415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.349540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.349566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.349685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.349713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.349809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.349836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-23 18:22:49.349950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.349975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.350100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.350125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.350248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.350276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.350404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.350430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.350550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.350576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-23 18:22:49.350660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.350685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.350809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.350835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.350980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.351007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.351123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.351151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.351240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.351267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-23 18:22:49.351391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.351418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.351546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.351571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.351723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.351750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.351865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.351892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.352085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.352114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 
00:34:41.800 [2024-07-23 18:22:49.352212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.352241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.352391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.352418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.352542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.352569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.800 [2024-07-23 18:22:49.352707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.800 [2024-07-23 18:22:49.352734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.800 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.352859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.352886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-23 18:22:49.353007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.353033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.353154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.353179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.353293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.353347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.353486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.353516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.353670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.353698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-23 18:22:49.353814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.353839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.353962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.353988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.354114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.354143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.354259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.354285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.354469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.354510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-23 18:22:49.354631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.354658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.354786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.354815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.354964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.354991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.355082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.355108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.355205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.355243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-23 18:22:49.355376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.355403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.355499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.355524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.355650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.355675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.355826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.355852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.355974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.356002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-23 18:22:49.356150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.356178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.356274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.356301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.356428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.356458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.356595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.356624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.356858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.356886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-23 18:22:49.357009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.357035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.357179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.357207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.357334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.357361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.357477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.357503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.357647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.357674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-23 18:22:49.357856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.357909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.358032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.358060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.358195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.358222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.358349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.358376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.358477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.358503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 
00:34:41.801 [2024-07-23 18:22:49.358630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.358657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.801 [2024-07-23 18:22:49.358784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.801 [2024-07-23 18:22:49.358811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.801 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.358937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.358963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.359093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.359120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.359269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.359297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-23 18:22:49.359460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.359488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.359585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.359611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.359763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.359791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.359938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.359966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.360098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.360123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-23 18:22:49.360269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.360296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.360434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.360460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.360572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.360598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.360700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.360727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.360855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.360883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-23 18:22:49.361005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.361031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.361150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.361176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.361360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.361389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.361503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.361530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.361632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.361658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-23 18:22:49.361805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.361832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.361966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.361993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.362116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.362143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.362255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.362281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.362414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.362441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-23 18:22:49.362540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.362565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.362649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.362675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.362795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.362833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.362932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.362958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.363053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.363079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 
00:34:41.802 [2024-07-23 18:22:49.363205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.363235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.363365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.363392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.363495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.363522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.802 [2024-07-23 18:22:49.363650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.802 [2024-07-23 18:22:49.363678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.802 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.363831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.363858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 
00:34:41.803 [2024-07-23 18:22:49.363978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.364004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.364121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.364148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.364274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.364301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.364424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.364449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.364565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.364591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 
00:34:41.803 [2024-07-23 18:22:49.364739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.364767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.364915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.364942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.365065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.365091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.365216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.365243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.365361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.365387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 
00:34:41.803 [2024-07-23 18:22:49.365486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.365511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.365630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.365657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.365772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.365798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.365922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.365949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.366086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.366127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 
00:34:41.803 [2024-07-23 18:22:49.366254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.366283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.366407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.366445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.366581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.366611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.366733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.366760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.366866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.366894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 
00:34:41.803 [2024-07-23 18:22:49.367016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.367044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.367168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.367195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.367302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.367349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.367484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.367514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.367637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.367663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 
00:34:41.803 [2024-07-23 18:22:49.367791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.367819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.367906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.367931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.368030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.368056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.368148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.368175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.368312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.368362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 
00:34:41.803 [2024-07-23 18:22:49.368496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.368525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.368638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.368664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.368755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.368786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.368885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.368911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.369039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.369068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 
00:34:41.803 [2024-07-23 18:22:49.369172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.369198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.803 [2024-07-23 18:22:49.369328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.803 [2024-07-23 18:22:49.369356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.803 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.369475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.369501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.369623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.369650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.369758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.369785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 
00:34:41.804 [2024-07-23 18:22:49.369933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.369961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.370078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.370104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.370240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.370267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.370361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.370388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.370515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.370541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 
00:34:41.804 [2024-07-23 18:22:49.370639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.370664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.370907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.370960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.371109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.371136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.371233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.371259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.371388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.371417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 
00:34:41.804 [2024-07-23 18:22:49.371544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.371572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.371667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.371693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.371806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.371831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.371949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.371975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 00:34:41.804 [2024-07-23 18:22:49.372094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.804 [2024-07-23 18:22:49.372120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.804 qpair failed and we were unable to recover it. 
00:34:41.804 [2024-07-23 18:22:49.372243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.372269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.372443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.372484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.372582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.372610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.372705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.372731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.372832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.372863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.372986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.373014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.373128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.373154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.373281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.373310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.373439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.373465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.373584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.373611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.373758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.373785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.373898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.373923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.374071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.374098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.374197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.374223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.374346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.374372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.374464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.374489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.374582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.374607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.374722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.804 [2024-07-23 18:22:49.374748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.804 qpair failed and we were unable to recover it.
00:34:41.804 [2024-07-23 18:22:49.374872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.374899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.375027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.375054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.375148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.375176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.375269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.375295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.375403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.375430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.375516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.375541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.375673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.375711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.375831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.375897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.376030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.376055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.376178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.376204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.376352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.376380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.376504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.376532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.376677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.376704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.376822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.376849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.376973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.376999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.377144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.377171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.377313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.377350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.377477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.377504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.377625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.377652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.377772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.377798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.377923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.377950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.378079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.378107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.378258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.378285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.378445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.378472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.378596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.378624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.378746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.378773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.378901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.378933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.379093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.379157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.379258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.379285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.379389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.379416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.379543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.379570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.379717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.379743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.379896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.379922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.380045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.380074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.380209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.380236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.380340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.380366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.380467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.380493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.380618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.805 [2024-07-23 18:22:49.380643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.805 qpair failed and we were unable to recover it.
00:34:41.805 [2024-07-23 18:22:49.380796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.380824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.381018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.381067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.381220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.381247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.381341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.381367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.381480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.381506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.381626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.381653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.381822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.381884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.382118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.382147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.382268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.382295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.382450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.382477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.382573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.382598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.382711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.382739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.382836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.382862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.382986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.383013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.383167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.383205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.383312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.383346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.383471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.383496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.383617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.383645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.383768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.383796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.383952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.383980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.384103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.384129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.384229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.384256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.384356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.384382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.384472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.384497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.384622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.384649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.384745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.384770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.384863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.384890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.384990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.385015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.385108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.385133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.385240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.385281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.385390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.385418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.385563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.385590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.385709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.385735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.385889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.385916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.386035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.386063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.386154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.386181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.386270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.386295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.386426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.806 [2024-07-23 18:22:49.386453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.806 qpair failed and we were unable to recover it.
00:34:41.806 [2024-07-23 18:22:49.386567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.807 [2024-07-23 18:22:49.386594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.807 qpair failed and we were unable to recover it.
00:34:41.807 [2024-07-23 18:22:49.386694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.386719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.386841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.386868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.386992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.387019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.387169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.387200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.387327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.387353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-23 18:22:49.387518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.387547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.387664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.387691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.387811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.387838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.387967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.387994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.388114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.388142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-23 18:22:49.388274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.388314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.388453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.388482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.388608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.388634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.388729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.388756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.388879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.388906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-23 18:22:49.389022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.389048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.389198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.389226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.389362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.389389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.389515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.389542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.389649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.389676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-23 18:22:49.389795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.389822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.389970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.389997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.390128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.390155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.390277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.390304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.390430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.390457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-23 18:22:49.390579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.390605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.390695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.390721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.390838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.390865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.390986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.391012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.391161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.391188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-23 18:22:49.391335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.391378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-23 18:22:49.391468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.807 [2024-07-23 18:22:49.391496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.391617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.391644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.391767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.391793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.392045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.392099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-23 18:22:49.392200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.392226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.392376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.392403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.392524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.392551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.392696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.392722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.392836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.392863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-23 18:22:49.392963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.392991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.393112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.393139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.393260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.393287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.393443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.393470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.393604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.393630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-23 18:22:49.393731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.393757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.393877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.393905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.394004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.394042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.394152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.394179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.394273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.394299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-23 18:22:49.394465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.394505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.394659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.394687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.394810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.394839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.394976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.395003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.395100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.395126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-23 18:22:49.395224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.395250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.395348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.395375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.395466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.395497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.395586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.395612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.395728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.395755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-23 18:22:49.395871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.395897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.396017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.396043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.396157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.396183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.396271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.396297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.396434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.396461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-23 18:22:49.396607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.396633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.396747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.396773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.396857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.396883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.396975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.808 [2024-07-23 18:22:49.397001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-23 18:22:49.397119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.397146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 
00:34:41.809 [2024-07-23 18:22:49.397273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.397299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.397431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.397458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.397548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.397574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.397700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.397727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.397851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.397877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 
00:34:41.809 [2024-07-23 18:22:49.398023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.398049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.398152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.398178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.398269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.398296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.398429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.398455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.398572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.398597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 
00:34:41.809 [2024-07-23 18:22:49.398730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.398756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.398870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.398895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.398995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.399035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.399162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.399190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 00:34:41.809 [2024-07-23 18:22:49.399332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.809 [2024-07-23 18:22:49.399365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.809 qpair failed and we were unable to recover it. 
00:34:41.809 [2024-07-23 18:22:49.399493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.399520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.399634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.399661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.399809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.399836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.399932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.399960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.400085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.400112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.400205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.400231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.400353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.400380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.400502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.400528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.400618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.400643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.400788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.400814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.400972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.401011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.401137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.401165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.401288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.401323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.401424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.401451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.401573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.401599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.401720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.401747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.401899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.401926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.402043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.402070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.402217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.402244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.402338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.402372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.402518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.402544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.402667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.809 [2024-07-23 18:22:49.402694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.809 qpair failed and we were unable to recover it.
00:34:41.809 [2024-07-23 18:22:49.402825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.402851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.403000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.403026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.403147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.403173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.403337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.403365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.403465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.403498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.403633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.403660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.403774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.403801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.403897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.403925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.404051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.404078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.404228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.404254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.404409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.404437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.404563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.404590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.404690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.404716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.404866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.404894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.405033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.405060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.405206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.405232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.405382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.405410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.405555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.405581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.405685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.405711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.405847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.405873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.405964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.406000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.406146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.406173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.406272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.406299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.406421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.406461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.406573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.406624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.406758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.406786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.406900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.406956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.407144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.407200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.407329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.407367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.407516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.407542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.407733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.407795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.408084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.408198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.408431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.408460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.408598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.408625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.408749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.408775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.408874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.408902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.409052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.409107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.810 [2024-07-23 18:22:49.409254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.810 [2024-07-23 18:22:49.409281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:41.810 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.409454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.409496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.409633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.409664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.409861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.409901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.410114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.410168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.410270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.410296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.410438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.410465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.410619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.410686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.410931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.410982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.411182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.411210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.411345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.411372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.411469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.411496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.411617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.411644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.411763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.411790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.411912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.411938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.412035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.412061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.412210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.412236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.412361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.412388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.412488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.412514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.412638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.412664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.412751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.412778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.412933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.412959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.413109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.413135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.413236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.413277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.413384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.413414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.413515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.413543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.413639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.413699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.413961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.414026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.414212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.414239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.414375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.414402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.414527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.414553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.414751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.414808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.414990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.415040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.415133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.415160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.415303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.415335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.415475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.415503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.415651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.415677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.415847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.415910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.416146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.811 [2024-07-23 18:22:49.416195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.811 qpair failed and we were unable to recover it.
00:34:41.811 [2024-07-23 18:22:49.416290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.416323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.416452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.416479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.416600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.416625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.416790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.416844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.417040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.417067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.417210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.417237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.417360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.417386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.417475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.417501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.417598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.417638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.417762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.417796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.417983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.418051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.418262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.418289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.418420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.418449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.418573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.418599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.418722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.418749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.418872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.418899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.419162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.812 [2024-07-23 18:22:49.419227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:41.812 qpair failed and we were unable to recover it.
00:34:41.812 [2024-07-23 18:22:49.419457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.419485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.419575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.419601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.419725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.419752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.419883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.419909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.420102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.420155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 
00:34:41.812 [2024-07-23 18:22:49.420280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.420306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.420422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.420449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.420598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.420625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.420742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.420768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.420905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.420931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 
00:34:41.812 [2024-07-23 18:22:49.421056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.421082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.421199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.421225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.812 [2024-07-23 18:22:49.421350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.812 [2024-07-23 18:22:49.421377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.812 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.421472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.421498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.421587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.421613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 
00:34:41.813 [2024-07-23 18:22:49.421726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.421752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.421839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.421865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.421963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.421990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.422117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.422146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.422300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.422338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 
00:34:41.813 [2024-07-23 18:22:49.422443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.422471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.422628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.422655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.422804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.422831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.422960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.423000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.423197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.423257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 
00:34:41.813 [2024-07-23 18:22:49.423362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.423390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.423506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.423533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.423658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.423685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.423784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.423812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.423955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.423983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 
00:34:41.813 [2024-07-23 18:22:49.424076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.424102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.424196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.424224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.424385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.424414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.424570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.424597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.424717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.424743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 
00:34:41.813 [2024-07-23 18:22:49.424998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.425063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.425283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.425309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.425437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.425464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.425563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.425589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.425897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.425961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 
00:34:41.813 [2024-07-23 18:22:49.426130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.426156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.426285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.426313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.426450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.426477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.426602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.426629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.426752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.426780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 
00:34:41.813 [2024-07-23 18:22:49.426993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.427041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.427167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.427194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.427333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.427361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.427484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.427511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 00:34:41.813 [2024-07-23 18:22:49.427640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.813 [2024-07-23 18:22:49.427667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.813 qpair failed and we were unable to recover it. 
00:34:41.813 [2024-07-23 18:22:49.427761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.427789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.427923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.427963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.428121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.428148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.428286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.428334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.428467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.428495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 
00:34:41.814 [2024-07-23 18:22:49.428719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.428785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.429128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.429194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.429364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.429392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.429518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.429546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.429689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.429744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 
00:34:41.814 [2024-07-23 18:22:49.429948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.430002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.430119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.430146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.430237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.430266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.430413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.430454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.430558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.430587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 
00:34:41.814 [2024-07-23 18:22:49.430760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.430825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.431127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.431193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.431432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.431460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.431561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.431588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.431742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.431768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 
00:34:41.814 [2024-07-23 18:22:49.431925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.431952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.432160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.432229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.432428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.432456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.432586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.432613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.432863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.432929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 
00:34:41.814 [2024-07-23 18:22:49.433233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.433298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.433486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.433514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.433661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.433687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.433832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.433859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 00:34:41.814 [2024-07-23 18:22:49.433954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.814 [2024-07-23 18:22:49.433981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:41.814 qpair failed and we were unable to recover it. 
00:34:41.814 - 00:34:42.091 [2024-07-23 18:22:49.434150 - 2024-07-23 18:22:49.453903] the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeated for ~110 further connect() attempts (errno = 111), against tqpair=0x7f6320000b90, tqpair=0x7b3f40, and tqpair=0x7f6328000b90, all with addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it."
00:34:42.091 [2024-07-23 18:22:49.454224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.454291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.454482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.454509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.454633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.454659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.454758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.454789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.454877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.454904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 
00:34:42.091 [2024-07-23 18:22:49.455111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.455176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.455422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.455450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.455574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.455601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.455699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.455725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.456029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.456093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 
00:34:42.091 [2024-07-23 18:22:49.456309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.456342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.456467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.456493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.456674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.456736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.456990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.457053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.091 qpair failed and we were unable to recover it. 00:34:42.091 [2024-07-23 18:22:49.457306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.091 [2024-07-23 18:22:49.457338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 
00:34:42.092 [2024-07-23 18:22:49.457481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.457507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.457605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.457632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.457768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.457794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.458116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.458181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.458388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.458416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 
00:34:42.092 [2024-07-23 18:22:49.458509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.458535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.458657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.458683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.458819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.458845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.459040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.459105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.459372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.459399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 
00:34:42.092 [2024-07-23 18:22:49.459514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.459539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.459683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.459710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.459831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.459894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.460198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.460264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.460492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.460519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 
00:34:42.092 [2024-07-23 18:22:49.460666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.460692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.460875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.460941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.461202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.461266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.461444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.461472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.461599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.461625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 
00:34:42.092 [2024-07-23 18:22:49.461749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.461775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.461919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.461945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.462251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.462365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.462490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.462517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.462676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.462702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 
00:34:42.092 [2024-07-23 18:22:49.462915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.462981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.463313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.463388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.463505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.463531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.463652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.463684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.463905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.463970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 
00:34:42.092 [2024-07-23 18:22:49.464279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.092 [2024-07-23 18:22:49.464375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.092 qpair failed and we were unable to recover it. 00:34:42.092 [2024-07-23 18:22:49.464483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.464508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.464801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.464866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.465144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.465211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.465532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.465598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 
00:34:42.093 [2024-07-23 18:22:49.465912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.465977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.466301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.466384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.466702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.466766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.467025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.467090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.467363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.467431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 
00:34:42.093 [2024-07-23 18:22:49.467733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.467797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.468030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.468095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.468367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.468434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.468706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.468771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.469042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.469107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 
00:34:42.093 [2024-07-23 18:22:49.469421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.469486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.469797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.469862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.470165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.470229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.470580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.470647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.470955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.471019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 
00:34:42.093 [2024-07-23 18:22:49.471350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.471416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.471694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.471759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.472015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.472083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.472400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.472467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.472738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.472802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 
00:34:42.093 [2024-07-23 18:22:49.473125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.473191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.473497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.473564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.473875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.473939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.474208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.474272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.474603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.474669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 
00:34:42.093 [2024-07-23 18:22:49.474948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.475012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.475284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.475360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.475675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.475739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.093 qpair failed and we were unable to recover it. 00:34:42.093 [2024-07-23 18:22:49.476014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.093 [2024-07-23 18:22:49.476079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 00:34:42.094 [2024-07-23 18:22:49.476399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.094 [2024-07-23 18:22:49.476465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.094 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-23 18:22:49.514857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.514922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.515227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.515291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.515626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.515692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.515998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.516063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.516351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.516418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-23 18:22:49.516682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.516747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.517011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.517078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.517395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.517461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.517771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.517836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.518157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.518223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-23 18:22:49.518559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.518626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.518898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.518963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.519242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.519307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.519600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.519678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.519916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.519984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-23 18:22:49.520301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.520384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.520679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.520731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.520990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.521059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.521308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.521394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.521675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.521735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-23 18:22:49.522026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.522082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.522382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.522440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.522667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.522723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.522951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.523007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.523210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.523265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-23 18:22:49.523520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.523576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-23 18:22:49.523817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-23 18:22:49.523872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.524109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.524163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.524363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.524419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.524618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.524672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-23 18:22:49.524876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.524931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.525164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.525218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.525484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.525539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.525773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.525828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.526056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.526111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-23 18:22:49.526344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.526404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.526598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.526654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.526917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.526972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.527240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.527300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.527564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.527619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-23 18:22:49.527866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.527951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.528254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.528312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.528620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.528676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.528911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.528970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.529243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.529302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-23 18:22:49.529566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.529629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.529831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.529886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.530114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.530168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.530412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.530470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.530707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.530761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-23 18:22:49.530989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.531068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.531279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.531387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.531664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.531719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.531935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.532000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.532264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.532334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-23 18:22:49.532630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.532685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.532963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.533017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.533238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.533292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.533562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.533619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.533865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.533919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-23 18:22:49.534146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.534200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.534447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.534504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.534695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-23 18:22:49.534750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-23 18:22:49.534998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-23 18:22:49.535052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-23 18:22:49.535232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-23 18:22:49.535286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 
00:34:42.099 [2024-07-23 18:22:49.535566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-23 18:22:49.535629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-23 18:22:49.535868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-23 18:22:49.535922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-23 18:22:49.536123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-23 18:22:49.536178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-23 18:22:49.536410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-23 18:22:49.536466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-23 18:22:49.536757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.536811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 
00:34:42.100 [2024-07-23 18:22:49.537024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.537078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 00:34:42.100 [2024-07-23 18:22:49.537295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.537365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 00:34:42.100 [2024-07-23 18:22:49.537566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.537624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 00:34:42.100 [2024-07-23 18:22:49.537856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.537910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 00:34:42.100 [2024-07-23 18:22:49.538104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.538178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 
00:34:42.100 [2024-07-23 18:22:49.538482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.538539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 00:34:42.100 [2024-07-23 18:22:49.538772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.538829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 00:34:42.100 [2024-07-23 18:22:49.539075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.539130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 00:34:42.100 [2024-07-23 18:22:49.539338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.539418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 00:34:42.100 [2024-07-23 18:22:49.539670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.100 [2024-07-23 18:22:49.539725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.100 qpair failed and we were unable to recover it. 
00:34:42.103 [2024-07-23 18:22:49.574337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.574394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.574586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.574641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.574867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.574921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.575153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.575209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.575478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.575534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 
00:34:42.103 [2024-07-23 18:22:49.575727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.575782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.576010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.576064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.576289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.576353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.576597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.576652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.576838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.576893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 
00:34:42.103 [2024-07-23 18:22:49.577126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.577182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.577411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.577467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.577697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.577752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.578000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.578054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.578338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.578394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 
00:34:42.103 [2024-07-23 18:22:49.578614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.578669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.578894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.578949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.579172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.579226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.579483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.579538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 00:34:42.103 [2024-07-23 18:22:49.579769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.579823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.103 qpair failed and we were unable to recover it. 
00:34:42.103 [2024-07-23 18:22:49.580017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.103 [2024-07-23 18:22:49.580071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.580299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.580387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.580663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.580717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.580913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.580977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.581155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.581210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 
00:34:42.104 [2024-07-23 18:22:49.581450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.581507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.581742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.581797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.582033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.582089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.582358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.582414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.582618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.582674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 
00:34:42.104 [2024-07-23 18:22:49.582896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.582951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.583189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.583244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.583524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.583580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.583814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.583869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.584105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.584160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 
00:34:42.104 [2024-07-23 18:22:49.584416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.584473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.584750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.584804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.585056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.585111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.585312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.585380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.585658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.585713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 
00:34:42.104 [2024-07-23 18:22:49.585941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.585996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.586211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.586265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.586501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.586559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.586838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.586893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.587170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.587224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 
00:34:42.104 [2024-07-23 18:22:49.587471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.587528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.587725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.587780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.587990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.588045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.588234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.588292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.588596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.588651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 
00:34:42.104 [2024-07-23 18:22:49.588866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.588922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.589125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-23 18:22:49.589180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-23 18:22:49.589436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.589492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.589720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.589775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.589967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.590023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-23 18:22:49.590225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.590281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.590531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.590586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.590812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.590866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.591050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.591105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.591334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.591392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-23 18:22:49.591634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.591688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.591886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.591940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.592169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.592224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.592441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.592497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.592777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.592832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-23 18:22:49.593028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.593084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.593351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.593408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.593593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.593648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.593844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.593899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.594178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.594233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-23 18:22:49.594442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.594498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.594769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.594823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.595054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.595111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.595380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.595457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-23 18:22:49.595736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-23 18:22:49.595791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-23 18:22:49.596063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.596118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.596346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.596402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.596687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.596742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.596974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.597028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.597253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.597308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.597545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.597601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.597866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.597920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.598149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.598203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.598472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.598529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.598757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.598811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.599084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.599139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.599349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.599412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.599607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.599664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.599901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.105 [2024-07-23 18:22:49.599955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.105 qpair failed and we were unable to recover it.
00:34:42.105 [2024-07-23 18:22:49.600194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.600249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.600463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.600528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.600752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.600808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.601037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.601092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.601341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.601406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.601587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.601643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.601872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.601927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.602170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.602224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.602506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.602561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.602804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.602858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.603088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.603143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.603377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.603433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.603681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.603736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.603957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.604012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.604241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.604295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.604567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.604623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.604837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.604892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.605165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.605218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.605514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.605571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.605802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.605859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.606129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.606184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.606428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.606484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.606755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.606810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.607046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.607101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.607333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.607390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.607595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.607650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.607849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.607903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.608095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.608152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.608409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.608466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.608652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.608707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.608917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.608972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.609158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.609212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.609410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.609466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.609705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.609760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.610005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.106 [2024-07-23 18:22:49.610062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.106 qpair failed and we were unable to recover it.
00:34:42.106 [2024-07-23 18:22:49.610263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.610329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.610628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.610683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.610888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.610943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.611160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.611214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.611451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.611507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.611720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.611774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.612003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.612066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.612345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.612406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.612662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.612716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.612904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.612959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.613189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.613243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.613496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.613552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.613766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.613821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.614100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.614155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.614438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.614494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.614792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.614847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.615036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.615091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.615340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.615396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.615629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.615684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.615882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.615936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.616214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.616269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.616513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.616571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.616841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.616896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.617125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.617179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.617421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.617478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.617723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.617778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.617998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.618053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.618341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.618398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.618696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.618751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.618965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.619020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.619287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.619353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.619643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.619698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.619937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.619992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.620236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.620291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.620545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.620600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.620834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.620889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.107 qpair failed and we were unable to recover it.
00:34:42.107 [2024-07-23 18:22:49.621076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-23 18:22:49.621130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.621396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.621453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.621672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.621727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.621921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.621976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.622156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.622212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.622435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.622491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.622699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.622754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.623017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.623073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.623300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.623367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.623597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.623652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.623920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.623982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.624178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.624235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.624446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.624502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.624742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.624797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.624993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.625048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.625238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.625293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.625547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.625602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.625833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.625888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.626064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.626119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.626351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.626408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.626585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.626642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.626843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.626897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.627128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.627183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.627454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.627510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.627735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.627790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.628009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.628064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.628339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.628395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.628665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.628719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.628934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.628988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.629268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-23 18:22:49.629348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-23 18:22:49.629581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.108 [2024-07-23 18:22:49.629636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.108 qpair failed and we were unable to recover it. 00:34:42.108 [2024-07-23 18:22:49.629880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.108 [2024-07-23 18:22:49.629934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.108 qpair failed and we were unable to recover it. 00:34:42.108 [2024-07-23 18:22:49.630204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.108 [2024-07-23 18:22:49.630258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.108 qpair failed and we were unable to recover it. 00:34:42.108 [2024-07-23 18:22:49.630495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.108 [2024-07-23 18:22:49.630551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.108 qpair failed and we were unable to recover it. 00:34:42.108 [2024-07-23 18:22:49.630770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.108 [2024-07-23 18:22:49.630825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.108 qpair failed and we were unable to recover it. 
00:34:42.108 [2024-07-23 18:22:49.631033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.108 [2024-07-23 18:22:49.631087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.108 qpair failed and we were unable to recover it. 00:34:42.108 [2024-07-23 18:22:49.631360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.631417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.631660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.631716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.631983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.632037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.632334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.632391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 
00:34:42.109 [2024-07-23 18:22:49.632669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.632723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.632926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.632980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.633162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.633217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.633459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.633515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.633784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.633839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 
00:34:42.109 [2024-07-23 18:22:49.634077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.634132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.634335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.634392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.634603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.634657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.634879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.634937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.635194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.635255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 
00:34:42.109 [2024-07-23 18:22:49.635546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.635610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.635818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.635873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.636107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.636162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.636382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.636448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.636710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.636766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 
00:34:42.109 [2024-07-23 18:22:49.636990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.637047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.637230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.637285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.637564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.637624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.637819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.637876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.638168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.638234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 
00:34:42.109 [2024-07-23 18:22:49.638473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.638538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.638794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.638849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.639121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.639176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.639389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.639457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.639717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.639772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 
00:34:42.109 [2024-07-23 18:22:49.640000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.640054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.109 qpair failed and we were unable to recover it. 00:34:42.109 [2024-07-23 18:22:49.640286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.109 [2024-07-23 18:22:49.640367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.640652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.640707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.640892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.640946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.641227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.641307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 
00:34:42.110 [2024-07-23 18:22:49.641592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.641647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.641919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.641973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.642192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.642257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.642478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.642543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.642863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.642917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 
00:34:42.110 [2024-07-23 18:22:49.643145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.643201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.643503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.643561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.643820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.643885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.644149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.644205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.644405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.644462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 
00:34:42.110 [2024-07-23 18:22:49.644713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.644778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.645029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.645083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.645293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.645359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.645632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.645697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.645992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.646047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 
00:34:42.110 [2024-07-23 18:22:49.646270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.646335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.646581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.646635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.646931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.646995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.647256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.647311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.647619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.647683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 
00:34:42.110 [2024-07-23 18:22:49.647947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.648040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.648295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.648364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.648539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.648594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.648931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.649005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.649194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.649249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 
00:34:42.110 [2024-07-23 18:22:49.649524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.649591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.649855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.649913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.650150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.650205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.650408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.650464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.650662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.650716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 
00:34:42.110 [2024-07-23 18:22:49.650903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.650960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.110 [2024-07-23 18:22:49.651252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.110 [2024-07-23 18:22:49.651328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.110 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.651620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.651675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.651905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.651962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.652294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.652371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 
00:34:42.111 [2024-07-23 18:22:49.652607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.652671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.652935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.652989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.653220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.653296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.653595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.653658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.653921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.653976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 
00:34:42.111 [2024-07-23 18:22:49.654195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.654249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.654509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.654574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.654846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.654919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.655149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.655203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.655457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.655523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 
00:34:42.111 [2024-07-23 18:22:49.655834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.655898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.656158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.656212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.656497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.656562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.656797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.656860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.657138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.657192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 
00:34:42.111 [2024-07-23 18:22:49.657431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.657486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.657780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.657844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.658056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.658110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.658293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.658358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.658570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.658635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 
00:34:42.111 [2024-07-23 18:22:49.658938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.659011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.659249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.659304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.659635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.659698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.660009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.660072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.660350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.660406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 
00:34:42.111 [2024-07-23 18:22:49.660640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.660702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.660946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.661009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.661359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.661415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.661612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.661667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.661868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.661923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 
00:34:42.111 [2024-07-23 18:22:49.662187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.662241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.662499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.662555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.662788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.662852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.111 [2024-07-23 18:22:49.663071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.111 [2024-07-23 18:22:49.663125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.111 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.663363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.663419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 
00:34:42.112 [2024-07-23 18:22:49.663701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.663755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.663955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.664012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.664258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.664350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.664552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.664606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.664868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.664924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 
00:34:42.112 [2024-07-23 18:22:49.665198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.665255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.665507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.665563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.665788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.665852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.666140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.666194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.666434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.666491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 
00:34:42.112 [2024-07-23 18:22:49.666719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.666773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.667038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.667092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.667371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.667426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.667674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.667728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.667970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.668024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 
00:34:42.112 [2024-07-23 18:22:49.668292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.668368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.668621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.668675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.668910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.668966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.669236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.669308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.669571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.669625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 
00:34:42.112 [2024-07-23 18:22:49.669821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.669876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.670069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.670125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.670329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.670386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.670616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.670679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.670977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.671032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 
00:34:42.112 [2024-07-23 18:22:49.671227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.671282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.671612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.671667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.671864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.671918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.672230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.672307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.672519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.672573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 
00:34:42.112 [2024-07-23 18:22:49.672769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.672831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.673034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.112 [2024-07-23 18:22:49.673090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.112 qpair failed and we were unable to recover it. 00:34:42.112 [2024-07-23 18:22:49.673384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.673450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.673732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.673787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.674058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.674140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 
00:34:42.113 [2024-07-23 18:22:49.674441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.674498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.674695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.674748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.674955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.675009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.675343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.675409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.675677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.675731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 
00:34:42.113 [2024-07-23 18:22:49.675999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.676053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.676289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.676388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.676634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.676688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.676878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.676933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.677156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.677219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 
00:34:42.113 [2024-07-23 18:22:49.678305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.678412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.678618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.678664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.678865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.678911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.679139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.679184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.679430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.679475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 
00:34:42.113 [2024-07-23 18:22:49.679664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.679707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.679873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.679916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.680096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.680138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.680326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.680379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.680594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.680637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 
00:34:42.113 [2024-07-23 18:22:49.680786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.680829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.681044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.681086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.681335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.681407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.681579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.681625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.681814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.681860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 
00:34:42.113 [2024-07-23 18:22:49.682076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.682119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.682336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.682388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.682589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.682631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.682851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.682894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 00:34:42.113 [2024-07-23 18:22:49.683039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.113 [2024-07-23 18:22:49.683081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.113 qpair failed and we were unable to recover it. 
00:34:42.115 [2024-07-23 18:22:49.698808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.698845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.699003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.699040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.699228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.699282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.699497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.699546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.699712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.699749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.699887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.699924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.700048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.700085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.700249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.700287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.700459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.700497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.700697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.700734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-23 18:22:49.700926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.700963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.701084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.701118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.701289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.701332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.701503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.701537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.701694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.701729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-23 18:22:49.701868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.701903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.702076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.702111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.702235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.702268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.702419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.702455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.702621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.702656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-23 18:22:49.702787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.702822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.702981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.703016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.703182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.703217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.703369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.703404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.703530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.703565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-23 18:22:49.703683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.703718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.704331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.704368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.704536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.704570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.704706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.704740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-23 18:22:49.704899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-23 18:22:49.704941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-23 18:22:49.708332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.708376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.708549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.708582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.708718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.708752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.708912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.708946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.709077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.709110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.116 [2024-07-23 18:22:49.709268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.116 [2024-07-23 18:22:49.709302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.116 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.709447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.709481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.709612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.709646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.709794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.709827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.709982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.710014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.710171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.710204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.710385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.710418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.710548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.710580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.710766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.710799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.710916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.710948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.711097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.711129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.711310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.711350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.711492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.711524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.711704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.711737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.711863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.711896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.712042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.712075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.712257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.712290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.712423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.712457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.712614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.712647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.712776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.712811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.712958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.712992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.713172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.713205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.713391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.713425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.713575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.713608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.713772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.713805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.713935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.713967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.715334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.715384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.715541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.715585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.715755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.715796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.716089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.716131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.716336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.716378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.716545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.716586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.716759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.716800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.716965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.717021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.717217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.717274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.717461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.717511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.717667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.717701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.117 [2024-07-23 18:22:49.717871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-23 18:22:49.717903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.718035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.718069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.718240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.718280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.718446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.718480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.718655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.718687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.718829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.718860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.719025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.719062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.719230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.719268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.719449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.719481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.719632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.719664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.719861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.719900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.720044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.720089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.720287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.720335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.720478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.720510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.720660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.720711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.720874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.720912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.721082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.721119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.721284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.721333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.721504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.721536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.721712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.721743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.721854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.721886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.722027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.722060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.722189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.722225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.722367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.722416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.722562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.722594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.722776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.722808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.723001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.723037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.723191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.723227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.723409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.723442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.723600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.723632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.723785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.723818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.723932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.723964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.724077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.724109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.724254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.724285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.724428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.724460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.724577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.118 [2024-07-23 18:22:49.724609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.118 qpair failed and we were unable to recover it.
00:34:42.118 [2024-07-23 18:22:49.724788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.724819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.724971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.725002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.725179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.725216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.725353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.725402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.725521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.725553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.725699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.725731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.725905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.725937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.726108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.726140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.726236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.726267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.726448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.726480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.726624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.726655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.726829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.726861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.726973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.119 [2024-07-23 18:22:49.727005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.119 qpair failed and we were unable to recover it. 00:34:42.119 [2024-07-23 18:22:49.727159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.119 [2024-07-23 18:22:49.727190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.119 qpair failed and we were unable to recover it. 00:34:42.119 [2024-07-23 18:22:49.727363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.119 [2024-07-23 18:22:49.727395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.119 qpair failed and we were unable to recover it. 00:34:42.119 [2024-07-23 18:22:49.727522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.119 [2024-07-23 18:22:49.727560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.119 qpair failed and we were unable to recover it. 00:34:42.119 [2024-07-23 18:22:49.727734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.119 [2024-07-23 18:22:49.727766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.119 qpair failed and we were unable to recover it. 
00:34:42.119 [2024-07-23 18:22:49.727937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.727968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.728159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.728195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.728361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.728411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.728561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.728592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.728734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.728765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.728936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.728967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.729072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.729104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.729273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.729305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.729509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c1ef0 is same with the state(5) to be set
00:34:42.119 [2024-07-23 18:22:49.729703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.729756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.729958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.730017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.730215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.730277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.730414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.730457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.730636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.730668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.730808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.730840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.119 qpair failed and we were unable to recover it.
00:34:42.119 [2024-07-23 18:22:49.731007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.119 [2024-07-23 18:22:49.731044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.731265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.731323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.731477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.731511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.731631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.731663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.731813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.731848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.731960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.731992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.732136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.732172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.732313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.732375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.732522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.732554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.732677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.732708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.732862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.732903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.733051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.733092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.733273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.120 [2024-07-23 18:22:49.733305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.120 qpair failed and we were unable to recover it.
00:34:42.120 [2024-07-23 18:22:49.733433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.733466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.733613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.733645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.733782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.733818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.733973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.734009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.734139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.734186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.734400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.734444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.734585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.734627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.734762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.734806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.734992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.735041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.735203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.735248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.735471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.735517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.735707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.735757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.735980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.736042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.736199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.736241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.736414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.736457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.736617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.736672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.736883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.736941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.737127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.737183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.737346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.737386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.737546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.737605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.737763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.737804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.737932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.737972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.738135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.738175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.738334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.738370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.397 [2024-07-23 18:22:49.738512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.397 [2024-07-23 18:22:49.738544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.397 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.738661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.738693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.738812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.738844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.738989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.739023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.739180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.739212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.739337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.739370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.739520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.739552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.739670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.739702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.739848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.739880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.739999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.740030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.740150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.740182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.740355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.740387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.740502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.740534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.740673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.740704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.740881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.740912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.741030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.741062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.741171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.741202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.741322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.741355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.741504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.741536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.741677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.741710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.741857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.741889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.742030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.742063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.742211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.742243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.742422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.742454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.742557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.742589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.742729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.742759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.742895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.742925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.743093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.743128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.743298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.743335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.743502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.743533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.743633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.743663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.743798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.743830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.743939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.743971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.744116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.744146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.744250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.744280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.398 qpair failed and we were unable to recover it.
00:34:42.398 [2024-07-23 18:22:49.744430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.398 [2024-07-23 18:22:49.744461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.744603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.399 [2024-07-23 18:22:49.744634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.744736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.399 [2024-07-23 18:22:49.744766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.744932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.399 [2024-07-23 18:22:49.744962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.745110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.399 [2024-07-23 18:22:49.745140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.745308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.399 [2024-07-23 18:22:49.745354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.745498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.399 [2024-07-23 18:22:49.745529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.745697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.399 [2024-07-23 18:22:49.745727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.745861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.399 [2024-07-23 18:22:49.745891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.399 qpair failed and we were unable to recover it.
00:34:42.399 [2024-07-23 18:22:49.746016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.746047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.746184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.746214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.746324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.746371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.746506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.746536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.746680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.746709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 
00:34:42.399 [2024-07-23 18:22:49.746815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.746844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.747009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.747038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.747176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.747205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.747321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.747351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.747483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.747513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 
00:34:42.399 [2024-07-23 18:22:49.747625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.747656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.747798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.747827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.747966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.747996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.748163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.748193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.748307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.748342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 
00:34:42.399 [2024-07-23 18:22:49.748440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.748470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.748607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.748636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.748742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.748771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.748879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.748908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.749041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.749070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 
00:34:42.399 [2024-07-23 18:22:49.749199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.749228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.749360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.749390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.749536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.749565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.749661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.749699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.749862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.749891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 
00:34:42.399 [2024-07-23 18:22:49.750053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.750082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.399 qpair failed and we were unable to recover it. 00:34:42.399 [2024-07-23 18:22:49.750244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.399 [2024-07-23 18:22:49.750273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.750426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.750455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.750582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.750610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.750738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.750766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 
00:34:42.400 [2024-07-23 18:22:49.750920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.750948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.751082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.751110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.751213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.751242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.751372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.751401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.751534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.751562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 
00:34:42.400 [2024-07-23 18:22:49.751717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.751745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.751873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.751902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.752044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.752072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.752205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.752234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.752333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.752376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 
00:34:42.400 [2024-07-23 18:22:49.752460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.752487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.752619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.752646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.752740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.752767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.752850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.752878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.753008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.753035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 
00:34:42.400 [2024-07-23 18:22:49.753136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.753163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.753346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.753374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.753475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.753503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.753628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.753655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.753811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.753838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 
00:34:42.400 [2024-07-23 18:22:49.753995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.754022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.754150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.754177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.754301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.754334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.754437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.754464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.754588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.754615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 
00:34:42.400 [2024-07-23 18:22:49.754741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.754768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.754926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.754953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.755082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.755109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.755229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.755256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.755374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.755402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 
00:34:42.400 [2024-07-23 18:22:49.755532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.755559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.755664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.755692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.755795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.755822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.755951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.400 [2024-07-23 18:22:49.755983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.400 qpair failed and we were unable to recover it. 00:34:42.400 [2024-07-23 18:22:49.756140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.756167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 
00:34:42.401 [2024-07-23 18:22:49.756340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.756366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.756521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.756548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.756695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.756722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.756823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.756850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.757003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.757029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 
00:34:42.401 [2024-07-23 18:22:49.757157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.757183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.757279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.757305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.757471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.757497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.757615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.757640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.757733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.757760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 
00:34:42.401 [2024-07-23 18:22:49.757858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.757884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.758008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.758034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.758132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.758158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.758276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.758302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.758452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.758478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 
00:34:42.401 [2024-07-23 18:22:49.758596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.758621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.758744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.758769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.758919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.758945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.759055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.759081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 00:34:42.401 [2024-07-23 18:22:49.759225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.401 [2024-07-23 18:22:49.759251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.401 qpair failed and we were unable to recover it. 
00:34:42.401 [2024-07-23 18:22:49.759375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.401 [2024-07-23 18:22:49.759400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.401 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111; sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated continuously from 18:22:49.759518 through 18:22:49.776160 ...]
00:34:42.405 [2024-07-23 18:22:49.776314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.776344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.776503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.776528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.776653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.776679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.776823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.776848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.776945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.776970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 
00:34:42.405 [2024-07-23 18:22:49.777070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.777096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.777216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.777241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.777339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.777364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.777514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.777539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.777670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.777695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 
00:34:42.405 [2024-07-23 18:22:49.777845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.777870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.778016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.778041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.778188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.778213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.778334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.778360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.778486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.778512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 
00:34:42.405 [2024-07-23 18:22:49.778655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.778681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.778775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.778800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.778922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.778947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.779065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.779090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.779206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.779231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 
00:34:42.405 [2024-07-23 18:22:49.779323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.779350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.779441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.779466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.779558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.779584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.779702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.779734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.779865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.779891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 
00:34:42.405 [2024-07-23 18:22:49.780012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.780038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.780181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.780206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.780331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.780356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.780448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.780474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.780623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.780648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 
00:34:42.405 [2024-07-23 18:22:49.780746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.780771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.780865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.780898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.405 [2024-07-23 18:22:49.781071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.405 [2024-07-23 18:22:49.781098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.405 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.781195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.781221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.781344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.781370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 
00:34:42.406 [2024-07-23 18:22:49.781512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.781546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.781653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.781687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.781802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.781830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.781960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.781986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.782090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.782115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 
00:34:42.406 [2024-07-23 18:22:49.782239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.782269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.782377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.782403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.782542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.782567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.782689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.782714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.782832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.782857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 
00:34:42.406 [2024-07-23 18:22:49.782949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.782975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.783102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.783127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.783248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.783274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.783364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.783390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.783491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.783516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 
00:34:42.406 [2024-07-23 18:22:49.783628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.783667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.783795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.783823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.783916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.783942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.784086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.784112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.784233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.784260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 
00:34:42.406 [2024-07-23 18:22:49.784410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.784437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.784524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.784549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.784644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.784670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.784764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.784789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.784889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.784916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 
00:34:42.406 [2024-07-23 18:22:49.785062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.785087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.785176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.785201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.785341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.785368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.785468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.785494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.785594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.785619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 
00:34:42.406 [2024-07-23 18:22:49.785740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.785766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.785867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.785892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.785988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.786013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.406 [2024-07-23 18:22:49.786166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.406 [2024-07-23 18:22:49.786194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.406 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.786325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.786352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 
00:34:42.407 [2024-07-23 18:22:49.786473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.786499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.786615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.786641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.786739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.786766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.786883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.786908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.787001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.787027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 
00:34:42.407 [2024-07-23 18:22:49.787149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.787175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.787264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.787290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.787430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.787458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.787580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.787605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-23 18:22:49.787700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-23 18:22:49.787725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 
00:34:42.410 [2024-07-23 18:22:49.803625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.803650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.803778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.803804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.803924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.803949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.804089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.804115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.804205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.804231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 
00:34:42.410 [2024-07-23 18:22:49.804375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.804401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.804522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.804548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.804669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.804694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.804896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.804921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.805045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.805070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 
00:34:42.410 [2024-07-23 18:22:49.805268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.805294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.805437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.805463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.805584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.805609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.805753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.805779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 00:34:42.410 [2024-07-23 18:22:49.805895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.410 [2024-07-23 18:22:49.805920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.410 qpair failed and we were unable to recover it. 
00:34:42.410 [2024-07-23 18:22:49.806020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.806045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.806171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.806196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.806321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.806347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.806430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.806455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.806602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.806627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 
00:34:42.411 [2024-07-23 18:22:49.806712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.806741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.806833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.806859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.806979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.807005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.807124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.807149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.807280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.807305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 
00:34:42.411 [2024-07-23 18:22:49.807455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.807480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.807571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.807596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.807692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.807717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.807854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.807879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.808024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.808050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 
00:34:42.411 [2024-07-23 18:22:49.808140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.808166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.808312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.808342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.808477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.808503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.808649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.808674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.808794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.808820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 
00:34:42.411 [2024-07-23 18:22:49.808937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.808962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.809081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.809106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.809228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.809253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.809344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.809370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.809516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.809541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 
00:34:42.411 [2024-07-23 18:22:49.809654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.809679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.809772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.809797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.809921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.809946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.810069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.810095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.810241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.810265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 
00:34:42.411 [2024-07-23 18:22:49.810354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.810380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.810466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.810492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.810647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.810672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.411 [2024-07-23 18:22:49.810775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.411 [2024-07-23 18:22:49.810801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.411 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.810891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.810916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 
00:34:42.412 [2024-07-23 18:22:49.811059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.811084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.811169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.811195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.811282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.811307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.811409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.811435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.811570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.811595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 
00:34:42.412 [2024-07-23 18:22:49.811746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.811772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.811865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.811890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.812018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.812044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.812165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.812190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.812288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.812313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 
00:34:42.412 [2024-07-23 18:22:49.812438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.812468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.812558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.812583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.812703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.812729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.812848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.812873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.813019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.813044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 
00:34:42.412 [2024-07-23 18:22:49.813194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.813219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.813354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.813381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.813501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.813527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.813629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.813654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.813788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.813813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 
00:34:42.412 [2024-07-23 18:22:49.813934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.813960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.814080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.814106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.814227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.814252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.814367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.814392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 00:34:42.412 [2024-07-23 18:22:49.814544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.814569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 
00:34:42.412 [2024-07-23 18:22:49.814691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.412 [2024-07-23 18:22:49.814716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.412 qpair failed and we were unable to recover it. 
00:34:42.412 [... identical error pair repeated through 2024-07-23 18:22:49.831099: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f6330000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it."; repeats elided ...]
00:34:42.416 [2024-07-23 18:22:49.831194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.831219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.831321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.831348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.831437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.831463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.831557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.831582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.831708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.831733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 
00:34:42.416 [2024-07-23 18:22:49.831859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.831885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.832005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.832030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.832126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.832152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.832269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.832294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.832422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.832447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 
00:34:42.416 [2024-07-23 18:22:49.832541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.832567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.832690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.832716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.832841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.832866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.832991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.833016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.833137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.833163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 
00:34:42.416 [2024-07-23 18:22:49.833263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.833288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.833395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.833421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.833547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.833573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.833693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.833718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.833861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.833886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 
00:34:42.416 [2024-07-23 18:22:49.834027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.834053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.834189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.834214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.834302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.834335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.834495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.834520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.834665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.834691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 
00:34:42.416 [2024-07-23 18:22:49.834781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.834808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.834904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.834929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.835033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.835058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.835155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-23 18:22:49.835182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-23 18:22:49.835268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.835294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.835394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.835424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.835544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.835570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.835662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.835688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.835805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.835831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.835934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.835959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.836090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.836115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.836249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.836288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.836407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.836436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.836529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.836555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.836704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.836730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.836849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.836876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.836967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.836999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.837093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.837120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.837240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.837265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.837361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.837387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.837507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.837533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.837628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.837653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.837746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.837772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.837891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.837916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.838035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.838059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.838180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.838205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.838339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.838368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.838472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.838505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.838631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.838657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.838778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.838803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.838925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.838950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.839100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.839126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.839249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.839275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.839382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.839407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.839548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.839574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.839708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.839735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.839850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.839875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.839999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.840024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.840144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.840169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.840314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.840345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.840439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.840464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.840576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.840601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.840690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.840715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.840829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.840854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-23 18:22:49.840963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-23 18:22:49.840988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-23 18:22:49.841111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.841143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.841264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.841289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.841414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.841441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.841599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.841624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.841713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.841739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 
00:34:42.418 [2024-07-23 18:22:49.841853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.841878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.842015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.842040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.842165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.842190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.842334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.842361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.842459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.842485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 
00:34:42.418 [2024-07-23 18:22:49.842584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.842609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.842752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.842776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.842876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.842901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.843025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.843050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-23 18:22:49.843183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-23 18:22:49.843208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 
00:34:42.418 [2024-07-23 18:22:49.843301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.843331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.843426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.843453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.843568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.843593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.843712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.843737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.843879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.843904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.843998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.844023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.844145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.844170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.844275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.844299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.844417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.844442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.844543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.844569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.844657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.844681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.844800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.844826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.844918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.844944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.845036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.845060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.845201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.845241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.845398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.845426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.845546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.845573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.845722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.845747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.845865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.845890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.846039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.846068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.846194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.846225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.418 qpair failed and we were unable to recover it.
00:34:42.418 [2024-07-23 18:22:49.846347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.418 [2024-07-23 18:22:49.846373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.846491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.846517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.846635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.846660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.846804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.846830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.846947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.846978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.847126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.847151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.847275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.847302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.847462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.847488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.847575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.847602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.847703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.847729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.847826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.847852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.847976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.848002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.848126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.848151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.848270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.848295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.848389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.848417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.848514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.848539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.848686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.848711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.848821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.848846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.848945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.848970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.849091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.849117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.849238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.849263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.849356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.849382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.849506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.849532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.849623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.849648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.849748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.849773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.849900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.849925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.850022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.850052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.850211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.850237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.850334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.850362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.850456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.850482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.850627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.850666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.850820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.850862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.850966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.850993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.851115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.851140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.851260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.851286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.851387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.851413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.851509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.851536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.419 [2024-07-23 18:22:49.851656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.419 [2024-07-23 18:22:49.851681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.419 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.851823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.851848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.851941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.851968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.852069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.852096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.852194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.852220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.852366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.852393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.852547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.852572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.852662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.852689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.852816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.852842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.852964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.852990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.853073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.853099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.853241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.853266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.853391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.853417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.853514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.853540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.853665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.853690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.853809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.853835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.853925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.853951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.854046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.854072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.854206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.854245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.854381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.854410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.854502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.854528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.854624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.854649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.854796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.854822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.854919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.854946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.855057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.855083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.855202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.855228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.855323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.855349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.855477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.855502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.855625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.855650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.855742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.855769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.855886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.855911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.856033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.856058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.856205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.856231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.856353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.856379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.856526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.856556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.856655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.856680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.856809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.420 [2024-07-23 18:22:49.856834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.420 qpair failed and we were unable to recover it.
00:34:42.420 [2024-07-23 18:22:49.856985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.420 [2024-07-23 18:22:49.857010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.420 qpair failed and we were unable to recover it. 00:34:42.420 [2024-07-23 18:22:49.857132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.420 [2024-07-23 18:22:49.857157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.420 qpair failed and we were unable to recover it. 00:34:42.420 [2024-07-23 18:22:49.857292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.420 [2024-07-23 18:22:49.857324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.420 qpair failed and we were unable to recover it. 00:34:42.420 [2024-07-23 18:22:49.857409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.420 [2024-07-23 18:22:49.857434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.420 qpair failed and we were unable to recover it. 00:34:42.420 [2024-07-23 18:22:49.857588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.420 [2024-07-23 18:22:49.857613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.420 qpair failed and we were unable to recover it. 
00:34:42.421 [2024-07-23 18:22:49.857725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.857751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.857863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.857888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.857986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.858012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.858157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.858183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.858306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.858339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 
00:34:42.421 [2024-07-23 18:22:49.858429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.858454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.858583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.858608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.858737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.858762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.858882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.858908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.859041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.859069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 
00:34:42.421 [2024-07-23 18:22:49.859165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.859190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.859292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.859324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.859414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.859440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.859557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.859582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.859668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.859693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 
00:34:42.421 [2024-07-23 18:22:49.859810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.859836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.860040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.860065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.860182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.860207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.860327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.860353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.860455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.860482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 
00:34:42.421 [2024-07-23 18:22:49.860600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.860625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.860715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.860741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.860831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.860856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.860979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.861004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.861100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.861126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 
00:34:42.421 [2024-07-23 18:22:49.861223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.861248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.861396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.861422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.861583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.861621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.861722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.861750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.861843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.861869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 
00:34:42.421 [2024-07-23 18:22:49.861997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.862023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.862152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.862178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.862303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.421 [2024-07-23 18:22:49.862341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.421 qpair failed and we were unable to recover it. 00:34:42.421 [2024-07-23 18:22:49.862436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.862461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.862584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.862609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 
00:34:42.422 [2024-07-23 18:22:49.862692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.862717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.862840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.862868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.862989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.863015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.863139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.863164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.863285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.863311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 
00:34:42.422 [2024-07-23 18:22:49.863453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.863478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.863612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.863638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.863748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.863773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.863874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.863899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.864023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.864048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 
00:34:42.422 [2024-07-23 18:22:49.864136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.864161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.864293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.864323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.864420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.864445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.864564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.864589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.864692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.864718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 
00:34:42.422 [2024-07-23 18:22:49.864807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.864834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.864945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.864970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.865125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.865151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.865256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.865294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.865431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.865458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 
00:34:42.422 [2024-07-23 18:22:49.865560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.865586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.865686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.865712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.865835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.865861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.865982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.866009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.866117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.866143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 
00:34:42.422 [2024-07-23 18:22:49.866284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.866332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.866495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.866521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.866609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.866635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.866754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.866780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.866871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.866897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 
00:34:42.422 [2024-07-23 18:22:49.866989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.867017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.867136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.867162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.867282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.867307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.867452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.867479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 00:34:42.422 [2024-07-23 18:22:49.867619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.867646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.422 qpair failed and we were unable to recover it. 
00:34:42.422 [2024-07-23 18:22:49.867763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.422 [2024-07-23 18:22:49.867788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-23 18:22:49.867879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.867905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-23 18:22:49.868033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.868063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-23 18:22:49.868183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.868209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-23 18:22:49.868331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.868357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 
00:34:42.423 [2024-07-23 18:22:49.868479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.868505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-23 18:22:49.868625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.868650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-23 18:22:49.868769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.868795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-23 18:22:49.868928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.868954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-23 18:22:49.869073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-23 18:22:49.869098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 
00:34:42.423 [2024-07-23 18:22:49.869223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.423 [2024-07-23 18:22:49.869248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.423 qpair failed and we were unable to recover it.
00:34:42.426 (the preceding three-line failure pattern repeats for every retry from 18:22:49.869345 through 18:22:49.885723, cycling through tqpair=0x7f6330000b90, 0x7f6320000b90, and 0x7f6328000b90; each attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and no qpair is recovered)
00:34:42.426 [2024-07-23 18:22:49.885820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.885848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.885992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.886018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.886104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.886129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.886290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.886321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.886449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.886475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-23 18:22:49.886582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.886621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.886762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.886790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.886912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.886939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.887058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.887084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.887177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.887204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-23 18:22:49.887329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.887355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.887463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.887490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.887578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.887603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.887722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.887747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.887846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.887871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-23 18:22:49.887968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.887999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.888093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.888118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.888240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.888268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.888362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.888390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.888514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.888540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-23 18:22:49.888686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.888711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.888805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.888834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.888922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.888947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.889094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.889119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.889244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.889271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-23 18:22:49.889401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.889427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.889520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.889546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.889651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.889690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.889820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.889847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.889929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.889955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-23 18:22:49.890070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.890096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.890222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.890248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.890373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.890402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.890492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.890517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.890639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.890665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-23 18:22:49.890753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.890779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.890898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.890924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.891015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.891046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.891170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.891202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.891313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.891347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-23 18:22:49.891495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-23 18:22:49.891520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-23 18:22:49.891614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.891640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.891763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.891789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.891916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.891942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.892066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.892091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-23 18:22:49.892210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.892235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.892374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.892400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.892521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.892546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.892691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.892717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.892809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.892835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-23 18:22:49.892986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.893014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.893107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.893133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.893255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.893281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.893409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.893436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.893569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.893594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-23 18:22:49.893741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.893766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.893892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.893918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.894008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.894034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.894136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.894161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.894250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.894277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-23 18:22:49.894439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.894465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.894585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.894611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.894730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.894756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.894842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.894868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.894992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.895018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-23 18:22:49.895141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.895167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.895283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.895309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.895414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.895440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.895560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.895587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.895710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.895735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-23 18:22:49.895854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.895879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.896003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.896031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.896183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.896208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.896326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.896353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.896489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.896514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-23 18:22:49.896662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.896687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.896766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.896791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.896925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.896956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.897087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.897113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-23 18:22:49.897205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-23 18:22:49.897230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-23 18:22:49.897330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-23 18:22:49.897356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure pattern repeats continuously, cycling through tqpairs 0x7f6330000b90, 0x7f6328000b90, and 0x7f6320000b90 ...]
00:34:42.430 [2024-07-23 18:22:49.912763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-23 18:22:49.912788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-23 18:22:49.912935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.912961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.913060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.913085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.913207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.913232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.913355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.913381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.913502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.913527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 
00:34:42.430 [2024-07-23 18:22:49.913650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.913675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.913793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.913819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.913912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.913943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.914066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.914092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.914239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.914265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 
00:34:42.430 [2024-07-23 18:22:49.914362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.914388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.914507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.914533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.914629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.914655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.914749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.914774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.914903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.914928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 
00:34:42.430 [2024-07-23 18:22:49.915027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.915053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.915172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.915197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.915285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.915310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.915439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.915465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.915587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.915612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 
00:34:42.430 [2024-07-23 18:22:49.915714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.915740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.915843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.915869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.915962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.915988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.916147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.916186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 00:34:42.430 [2024-07-23 18:22:49.916323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.430 [2024-07-23 18:22:49.916352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.430 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.916475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.916501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.916596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.916622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.916735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.916760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.916849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.916876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.917004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.917030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.917150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.917175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.917296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.917336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.917490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.917518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.917643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.917668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.917792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.917818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.917912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.917937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.918057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.918082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.918167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.918193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.918323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.918349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.918442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.918468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.918560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.918587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.918699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.918724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.918810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.918835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.918960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.918985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.919096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.919134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.919234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.919262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.919377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.919404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.919498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.919532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.919659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.919685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.919832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.919857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.919981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.920007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.920150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.920175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.920294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.920329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.920430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.920455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.920572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.920597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.920695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.920720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.920871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.920896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.920985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.921012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.921132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.921157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.921258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.921286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.921416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.921443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.921600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.921626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.921723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.921750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.921870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.921896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.922015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.922040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.922138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.922165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.922313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.922345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.922446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.922472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.922593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.922619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.922767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.922792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-23 18:22:49.922934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.922959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.923078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-23 18:22:49.923105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-23 18:22:49.923249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-23 18:22:49.923274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-23 18:22:49.923370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-23 18:22:49.923397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-23 18:22:49.923527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-23 18:22:49.923557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 
00:34:42.432 [2024-07-23 18:22:49.923676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.432 [2024-07-23 18:22:49.923701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.432 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 18:22:49.923796 through 18:22:49.939551, alternating between tqpair=0x7f6330000b90 and tqpair=0x7f6328000b90, always against addr=10.0.0.2, port=4420: connect() failed with errno = 111, followed by a sock connection error and "qpair failed and we were unable to recover it." ...]
00:34:42.434 [2024-07-23 18:22:49.939577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.434 [2024-07-23 18:22:49.939603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.434 qpair failed and we were unable to recover it.
00:34:42.434 [2024-07-23 18:22:49.939752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.939778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.939865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.939892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.939974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.939999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.940090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.940116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.940235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.940267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 
00:34:42.434 [2024-07-23 18:22:49.940385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.940411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.940507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.940532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.940675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.940700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.940847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.940871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.940993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.941019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 
00:34:42.434 [2024-07-23 18:22:49.941102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.941127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.941214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.941239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.941362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.941387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.941508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.941534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.941632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.941656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 
00:34:42.434 [2024-07-23 18:22:49.941743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.941770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.941855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.941879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.941998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.942023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.942120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.942145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.942256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.942295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 
00:34:42.434 [2024-07-23 18:22:49.942404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.942432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.942552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.942578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.942699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-23 18:22:49.942725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-23 18:22:49.942871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.942897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.943020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.943046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.943165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.943190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.943313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.943350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.943447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.943474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.943566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.943592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.943673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.943699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.943791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.943817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.943909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.943936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.944058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.944082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.944209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.944234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.944354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.944380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.944470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.944495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.944583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.944609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.944731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.944755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.944904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.944930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.945045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.945071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.945190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.945214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.945308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.945340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.945456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.945481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.945601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.945627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.945747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.945775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.945899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.945923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.946022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.946047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.946134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.946158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.946246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.946270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.946415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.946441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.946564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.946589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.946691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.946716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.946809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.946834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.946931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.946955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.947046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.947072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.947191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.947216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.947313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.947358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.947481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.947506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.947604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.947629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.947750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.947774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.947895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.947920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.948040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.948064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.948207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.948231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.948350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.948376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.948477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.948515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.948642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.948670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.948814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.948840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.948935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.948960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.949052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.949077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.949195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.949221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-23 18:22:49.949335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.949361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.949463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.949487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-23 18:22:49.949634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-23 18:22:49.949659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.436 [2024-07-23 18:22:49.949758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.436 [2024-07-23 18:22:49.949782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.436 qpair failed and we were unable to recover it. 00:34:42.436 [2024-07-23 18:22:49.949874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.436 [2024-07-23 18:22:49.949900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.436 qpair failed and we were unable to recover it. 
00:34:42.436 [2024-07-23 18:22:49.950017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.436 [2024-07-23 18:22:49.950042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.436 qpair failed and we were unable to recover it. 00:34:42.436 [2024-07-23 18:22:49.950164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.436 [2024-07-23 18:22:49.950190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.436 qpair failed and we were unable to recover it. 00:34:42.436 [2024-07-23 18:22:49.950283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.436 [2024-07-23 18:22:49.950309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.436 qpair failed and we were unable to recover it. 00:34:42.436 [2024-07-23 18:22:49.950417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.436 [2024-07-23 18:22:49.950443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.436 qpair failed and we were unable to recover it. 00:34:42.436 [2024-07-23 18:22:49.950540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.436 [2024-07-23 18:22:49.950567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.436 qpair failed and we were unable to recover it. 
00:34:42.438 [2024-07-23 18:22:49.966135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.966160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.966281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.966307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.966411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.966436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.966531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.966558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.966651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.966676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 
00:34:42.438 [2024-07-23 18:22:49.966795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.966821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.966943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.966970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.967068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.967093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.967215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.967241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.967362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.967388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 
00:34:42.438 [2024-07-23 18:22:49.967513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.967539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.967662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.967688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.967806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.967832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.967951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.967977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.968070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.968102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 
00:34:42.438 [2024-07-23 18:22:49.968248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.968276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.968372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.968398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.968515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.968541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.968630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.968657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.968802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.968827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 
00:34:42.438 [2024-07-23 18:22:49.968925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.968950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.969072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.969100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.969245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.969271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.969378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.969405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.969520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.969546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 
00:34:42.438 [2024-07-23 18:22:49.969639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.969665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.969785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.969811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.969933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.969960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.970109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.970134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.970253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.970279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 
00:34:42.438 [2024-07-23 18:22:49.970406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.970432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.970584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.970609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.970697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.970723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.970868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.438 [2024-07-23 18:22:49.970893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.438 qpair failed and we were unable to recover it. 00:34:42.438 [2024-07-23 18:22:49.971005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.971031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.971176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.971202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.971327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.971354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.971474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.971500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.971634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.971660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.971759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.971784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.971881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.971906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.972033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.972058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.972183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.972210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.972311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.972341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.972436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.972461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.972582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.972607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.972724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.972749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.972871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.972896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.972997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.973025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.973124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.973149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.973238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.973263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.973380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.973406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.973497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.973522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.973611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.973637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.973732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.973764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.973896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.973922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.974041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.974066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.974192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.974219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.974312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.974344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.974471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.974497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.974654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.974680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.974798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.974825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.974939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.974964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.975112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.975138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.975258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.975283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.975380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.975406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.975527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.975552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.975673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.975699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.975794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.975820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.975962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.975987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.976079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.976104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.976230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.976255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.976379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.976407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.976525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.976551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.976639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.976665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.976760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.976785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.976878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.976905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.977026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.977052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.977170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.977197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.977356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.977383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.977479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.977505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.977604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.977632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.977721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.977747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.977892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.977918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.978039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.978065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 
00:34:42.439 [2024-07-23 18:22:49.978184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.978209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.978343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.978369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.978468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.439 [2024-07-23 18:22:49.978494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.439 qpair failed and we were unable to recover it. 00:34:42.439 [2024-07-23 18:22:49.978640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.978666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.978764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.978790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.978906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.978932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.979077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.979103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.979201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.979227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.979358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.979386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.979510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.979539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.979659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.979684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.979810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.979835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.979921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.979946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.980032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.980059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.980150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.980175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.980267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.980293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.980385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.980411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.980541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.980566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.980692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.980718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.980807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.980833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.980978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.981004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.981096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.981121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.981277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.981302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.981407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.981436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.981587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.981613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.981706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.981732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.981858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.981883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.982000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.982025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.982145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.982172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.982322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.982349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.982448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.982475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.982567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.982593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.982689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.982716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.982836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.982863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.982961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.982987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.983078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.983104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.983229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.983256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.983400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.983427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.983546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.983571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.983658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.983684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.983839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.983865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.983985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.984011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.984129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.984154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.984239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.984265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.984364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.984390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.984537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.984563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.984681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.984707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.984802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.984827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.984911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.984936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-23 18:22:49.985054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.985086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 
00:34:42.440 [2024-07-23 18:22:49.985208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-23 18:22:49.985233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.985344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.985370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.985463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.985489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.985588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.985614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.985732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.985758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.985877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.985903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.986001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.986030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.986155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.986181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.986305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.986337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.986457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.986482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.986577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.986603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.986720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.986745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.986862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.986889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.986990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.987016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.987110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.987136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.987259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.987284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.987425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.987452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.987576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.987601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.987693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.987718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.987837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.987862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.987983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.988009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.988107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.988134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.988289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.988315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.988409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.988434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.988545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.988571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.988693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.988718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.988869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.988894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.989048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.989075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.989167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.989192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.989311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.989343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.989433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.989461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.989575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.989600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.989749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.989775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.989865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.989890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.989970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.989995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.990083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.990107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.990265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.990290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.990385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.990411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.990498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.990523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.990643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.990673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.990819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.990844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.990965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.990990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.991086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.991114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.991208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.991234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-23 18:22:49.991355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-23 18:22:49.991381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-23 18:22:49.991474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.991500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.991620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.991645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.991768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.991793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.991938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.991963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.992083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.992108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.992226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.992252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.992407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.992432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.992516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.992542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.992636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.992663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.992766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.992791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.992904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.992929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.993076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.993102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.993201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.993229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.993327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.993354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.993503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.993528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.993618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.993644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.993794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.993820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.993924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.993949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.994069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.994094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.994185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.994210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.994308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.994339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.994465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.994492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.994611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.994637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.994756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.994781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.994898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.994923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.995042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.995067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.995186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.995211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.995331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.995357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.995472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.995498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.995598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.995623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.995719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.995744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.995863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.995889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.996007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.996032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.996149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.996174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.996338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.996368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.996463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.996488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.996606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.996630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.996753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.996778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.996896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.996921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.997034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.997059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.997156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.997181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.997303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.997334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.997455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.997480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.997625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.441 [2024-07-23 18:22:49.997651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.441 qpair failed and we were unable to recover it.
00:34:42.441 [2024-07-23 18:22:49.997799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.997824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.997943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.997968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.998092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.998117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.998235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.998260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.998387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.998414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.998535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.998561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.998681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.998706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.998803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.998828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.998921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.998946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.999061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.999086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.999171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.999196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.999308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.999339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.999465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.999491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.999577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.999603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.999723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.999748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:49.999868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:49.999894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.000007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.000045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.000173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.000205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.000357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.000384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.000487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.000512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.000602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.000628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.000749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.000775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.000915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.000952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.001099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.001132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.001307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.001353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.001460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.001488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.001641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.001667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.001759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.001784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.001870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.001896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.001981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.002006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.002127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.002152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.002277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.002307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.002458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.002485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.002574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.002602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.002753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.002779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.002882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.002908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.003064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.003090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.003183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.003211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.003313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.003345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.003440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.003466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.003560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.003586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.003731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.003757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.003876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.003902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.004001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.004029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.004128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.004154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.004286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.004312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.004428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.004454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.004579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.004605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.004732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.004758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.004841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.004869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.005015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.442 [2024-07-23 18:22:50.005041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.442 qpair failed and we were unable to recover it.
00:34:42.442 [2024-07-23 18:22:50.005126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.005152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.005271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.005298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.005398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.005425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.005518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.005544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.005681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.005707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-23 18:22:50.005800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.005825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.005917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.005948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.006037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.006065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.006158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.006184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.006277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.006303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-23 18:22:50.006471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.006497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.006589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.006614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.006705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.006731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.006827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.006853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.006944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.006969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-23 18:22:50.007090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.007116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.007240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.007267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.007376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.007402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.007519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.007545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.007637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.007663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-23 18:22:50.007793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.007819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.007967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.007993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.008116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.008141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.008277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.008302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.008416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.008443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-23 18:22:50.008543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.008569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.008721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.008746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.008868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.008895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-23 18:22:50.008989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-23 18:22:50.009017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.009139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.009165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.009285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.009311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.009456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.009483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.009640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.009666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.009821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.009847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.009964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.009990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.010075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.010101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.010195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.010221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.010325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.010352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.010445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.010471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.010566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.010591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.010680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.010706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.010825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.010851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.010969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.010995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.011112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.011138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.011251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.011277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.011404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.011432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.011526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.011557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.011709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.011735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.011832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.011857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.012017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.012043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.012138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.012164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.012255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.012282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.012426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.012456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.012593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.012618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.012710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.012736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.012880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.012906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.013028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.013058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.013173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.013199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.013287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.013321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.013472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.013498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.013601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.013627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.013724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.013750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.013902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.013928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.014050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.014076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.014161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.014187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.014268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.014294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.014388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.014415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.014502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.014529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.014651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.014677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.014820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.014846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.014983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.015011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.015122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.015147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.015232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.015258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.015365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.015392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.015489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.015515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.015599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.015624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.015752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.015780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.015892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.015918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.016042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.016068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.016186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.016212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.016330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.016356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.016446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.016472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.016558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.016583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.016680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.016705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-23 18:22:50.016822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-23 18:22:50.016847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-23 18:22:50.016970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.443 [2024-07-23 18:22:50.016996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.443 qpair failed and we were unable to recover it.
00:34:42.443 [... the same three-line error sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it) repeats continuously from 18:22:50.016970 through 18:22:50.033280, alternating between tqpair=0x7f6328000b90 and tqpair=0x7f6330000b90, always with addr=10.0.0.2, port=4420 ...]
00:34:42.445 [2024-07-23 18:22:50.033408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.033435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.033565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.033592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.033684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.033710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.033807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.033833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.033958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.033984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 
00:34:42.445 [2024-07-23 18:22:50.034075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.034102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.034197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.034228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.034351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.034379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.034503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.034529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.034653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.034679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 
00:34:42.445 [2024-07-23 18:22:50.034781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.034806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.034905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.034943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.035105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.035134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.035242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.035270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.035375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.035402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 
00:34:42.445 [2024-07-23 18:22:50.035499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.035532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.035648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.035676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.035779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.035805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.035946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.035971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.036062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.036086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 
00:34:42.445 [2024-07-23 18:22:50.036204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.036231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.036359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.036389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.036501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.036539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.445 [2024-07-23 18:22:50.036649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.445 [2024-07-23 18:22:50.036679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.445 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.036791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.036821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 
00:34:42.724 [2024-07-23 18:22:50.036914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.036940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.037048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.037076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.037180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.037207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.037342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.037377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.037494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.037526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 
00:34:42.724 [2024-07-23 18:22:50.037649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.037681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.037800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.037832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.037943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.037976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.038107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.038139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.038277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.038322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 
00:34:42.724 [2024-07-23 18:22:50.038457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.038505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.038657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.038707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.038851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.038893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.724 qpair failed and we were unable to recover it. 00:34:42.724 [2024-07-23 18:22:50.039059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.724 [2024-07-23 18:22:50.039101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.039258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.039310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.725 [2024-07-23 18:22:50.039503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.039547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.039680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.039715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.039833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.039872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.039989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.040023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.040143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.040172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.725 [2024-07-23 18:22:50.040279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.040306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.040411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.040442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.040568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.040595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.040696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.040721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.040850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.040876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.725 [2024-07-23 18:22:50.040963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.040989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.041107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.041138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.041253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.041279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.041422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.041450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.041565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.041592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.725 [2024-07-23 18:22:50.041687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.041712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.041837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.041863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.041962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.041989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.042138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.042166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.042283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.042308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.725 [2024-07-23 18:22:50.042419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.042445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.042562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.042587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.042713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.042739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.042887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.042916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.043041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.043065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.725 [2024-07-23 18:22:50.043184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.043209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.043332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.043357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.043445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.043472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.043611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.043637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.043763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.043788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.725 [2024-07-23 18:22:50.043908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.043934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.044055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.044080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.044179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.044204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.044330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.044370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 00:34:42.725 [2024-07-23 18:22:50.044502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.725 [2024-07-23 18:22:50.044529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.725 [2024-07-23 18:22:50.044619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:42.725 [2024-07-23 18:22:50.044645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 
00:34:42.725 qpair failed and we were unable to recover it. 
00:34:42.729 [... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for tqpair=0x7f6328000b90 and tqpair=0x7f6330000b90 from 18:22:50.044619 through 18:22:50.061135 ...]
00:34:42.729 [2024-07-23 18:22:50.061285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.061311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.061412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.061438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.061558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.061584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.061682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.061710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.061804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.061829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 
00:34:42.729 [2024-07-23 18:22:50.061975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.062001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.062086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.062111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.062198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.062224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.062346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.062372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.062491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.062516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 
00:34:42.729 [2024-07-23 18:22:50.062611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.062637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.062722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.062748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.062866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.062893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.063015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.063042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.063158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.063184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 
00:34:42.729 [2024-07-23 18:22:50.063284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.063311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.063466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.063497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.063591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.063617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.063712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.063740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.063833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.063859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 
00:34:42.729 [2024-07-23 18:22:50.064002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.064028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.064173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.064199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.064323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.064349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.064496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.064522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.064613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.064639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 
00:34:42.729 [2024-07-23 18:22:50.064764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.064790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.064912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.064937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.065058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.065083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.065178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.065203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 00:34:42.729 [2024-07-23 18:22:50.065339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.729 [2024-07-23 18:22:50.065366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.729 qpair failed and we were unable to recover it. 
00:34:42.730 [2024-07-23 18:22:50.065460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.065487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.065602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.065627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.065749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.065775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.065869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.065895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.066017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.066042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 
00:34:42.730 [2024-07-23 18:22:50.066158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.066183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.066276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.066302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.066432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.066457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.066599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.066624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.066747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.066773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 
00:34:42.730 [2024-07-23 18:22:50.066861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.066886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.066972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.066998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.067130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.067169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.067307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.067343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.067471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.067498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 
00:34:42.730 [2024-07-23 18:22:50.067589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.067615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.067737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.067763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.067885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.067911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.068030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.068056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.068176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.068202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 
00:34:42.730 [2024-07-23 18:22:50.068330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.068357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.068478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.068505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.068599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.068625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.068715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.068741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.068828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.068864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 
00:34:42.730 [2024-07-23 18:22:50.069012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.069039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.069166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.069198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.069328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.069356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.069479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.069505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.069630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.069657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 
00:34:42.730 [2024-07-23 18:22:50.069746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.069772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.069874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.069900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.070013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.070039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.070182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.070208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.070333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.070360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 
00:34:42.730 [2024-07-23 18:22:50.070456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.070482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.070588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.070615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.730 [2024-07-23 18:22:50.070705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.730 [2024-07-23 18:22:50.070731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.730 qpair failed and we were unable to recover it. 00:34:42.731 [2024-07-23 18:22:50.070852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.731 [2024-07-23 18:22:50.070877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.731 qpair failed and we were unable to recover it. 00:34:42.731 [2024-07-23 18:22:50.070962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.731 [2024-07-23 18:22:50.070989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.731 qpair failed and we were unable to recover it. 
00:34:42.731 [2024-07-23 18:22:50.071119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.731 [2024-07-23 18:22:50.071144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.731 qpair failed and we were unable to recover it. 00:34:42.731 [2024-07-23 18:22:50.071236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.731 [2024-07-23 18:22:50.071262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.731 qpair failed and we were unable to recover it. 00:34:42.731 [2024-07-23 18:22:50.071348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.731 [2024-07-23 18:22:50.071372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.731 qpair failed and we were unable to recover it. 00:34:42.731 [2024-07-23 18:22:50.071460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.731 [2024-07-23 18:22:50.071485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.731 qpair failed and we were unable to recover it. 00:34:42.731 [2024-07-23 18:22:50.071599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.731 [2024-07-23 18:22:50.071624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.731 qpair failed and we were unable to recover it. 
00:34:42.731 [2024-07-23 18:22:50.071782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.071808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.071898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.071923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.072047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.072073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.072176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.072202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.072326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.072351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.072478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.072504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.072592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.072620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.072739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.072764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.072853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.072885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.073002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.073028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.073119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.073144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.073244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.073270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.073393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.073419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.073507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.073533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.073653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.073679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.073797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.073823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.073942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.073968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.074061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.074086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.074178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.074203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.074351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.074377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.074466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.074491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.074595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.074620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.074722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.074748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.074852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.074877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.075032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.075058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.075152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.075177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.075271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.075296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.075437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.075464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.075561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.075587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.075709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.075735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.731 [2024-07-23 18:22:50.075859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.731 [2024-07-23 18:22:50.075884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.731 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.075986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.076012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.076107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.076132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.076284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.076310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.076437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.076463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.076592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.076617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.076769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.076795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.076911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.076937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.077058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.077084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.077204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.077230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.077331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.077357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.077473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.077498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.077651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.077676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.077800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.077826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.077924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.077950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.078073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.078099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.078228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.078253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.078395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.078421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.078546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.078577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.078670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.078695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.078790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.078816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.078915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.078941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.079058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.079084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.079211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.079235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.079385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.079412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.079535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.079560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.079655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.079682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.079806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.079831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.079982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.080007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.080137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.080164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.080294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.080323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.080444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.080469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.080593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.080619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.080711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.080736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.080824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.080849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.080997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.081022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.081146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.081173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.081270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.081295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.732 [2024-07-23 18:22:50.081461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.732 [2024-07-23 18:22:50.081500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.732 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.081601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.081630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.081754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.081781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.081873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.081899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.082050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.082076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.082196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.082221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.082348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.082380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.082538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.082564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.082685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.082710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.082836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.082862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.082989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.083015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.083108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.083133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.083251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.083276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.083386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.083412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.083562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.083588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.083680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.083707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.083797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.083823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.083942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.083968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.084092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.084117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.084237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.084263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.084382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.084412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.084529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.084555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.084650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.084675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.084793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.084818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.084909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.084934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.085062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.085088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.085212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.085238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.085366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.085395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.085529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.085555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.085702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.085726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.085882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.085908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.086037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.086062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.086176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.086201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.733 [2024-07-23 18:22:50.086353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.733 [2024-07-23 18:22:50.086381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.733 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.086536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.086563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.086658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.086682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.086829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.086855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.086976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.087001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.087118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.087143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.087265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.087290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.087454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.087479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.087599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.087624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.087727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.087752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.087875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.087901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.088031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.088056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.088147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.088175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.088297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.734 [2024-07-23 18:22:50.088330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.734 qpair failed and we were unable to recover it.
00:34:42.734 [2024-07-23 18:22:50.088433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.088458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.088548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.088572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.088690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.088717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.088859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.088884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.088974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.088999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 
00:34:42.734 [2024-07-23 18:22:50.089093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.089118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.089266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.089291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.089421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.089449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.089552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.089577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.089671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.089697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 
00:34:42.734 [2024-07-23 18:22:50.089844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.089871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.089995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.090020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.090115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.090141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.090254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.090283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.090391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.090417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 
00:34:42.734 [2024-07-23 18:22:50.090565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.090590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.090686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.090712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.090812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.090837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.090957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.090983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.091078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.091106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 
00:34:42.734 [2024-07-23 18:22:50.091234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.091260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.091385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.091411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.091561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.091586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.091711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.091738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.734 qpair failed and we were unable to recover it. 00:34:42.734 [2024-07-23 18:22:50.091857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.734 [2024-07-23 18:22:50.091882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 
00:34:42.735 [2024-07-23 18:22:50.092031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.092058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.092206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.092233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.092356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.092383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.092480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.092506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.092602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.092628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 
00:34:42.735 [2024-07-23 18:22:50.092747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.092771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.092867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.092893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.093011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.093037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.093159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.093184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.093276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.093301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 
00:34:42.735 [2024-07-23 18:22:50.093435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.093472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.093572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.093599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.093749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.093775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.093896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.093923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.094046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.094072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 
00:34:42.735 [2024-07-23 18:22:50.094175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.094203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.094335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.094361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.094478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.094505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.094597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.094623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.094717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.094743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 
00:34:42.735 [2024-07-23 18:22:50.094851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.094876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.094973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.095000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.095120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.095148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.095245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.095271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.095429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.095455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 
00:34:42.735 [2024-07-23 18:22:50.095574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.095599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.095696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.095721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.095810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.095835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.095931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.095961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.096044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.096069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 
00:34:42.735 [2024-07-23 18:22:50.096166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.096193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.096311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.096343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.096461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.096486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.096587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.096613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.096763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.096789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 
00:34:42.735 [2024-07-23 18:22:50.096892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.096917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.097063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.735 [2024-07-23 18:22:50.097089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.735 qpair failed and we were unable to recover it. 00:34:42.735 [2024-07-23 18:22:50.097182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.097208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.097331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.097358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.097484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.097510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 
00:34:42.736 [2024-07-23 18:22:50.097600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.097626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.097744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.097770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.097868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.097894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.097996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.098035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.098189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.098216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 
00:34:42.736 [2024-07-23 18:22:50.098353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.098381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.098483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.098510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.098633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.098658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.098781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.098808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 00:34:42.736 [2024-07-23 18:22:50.098906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.736 [2024-07-23 18:22:50.098933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.736 qpair failed and we were unable to recover it. 
00:34:42.736 [2024-07-23 18:22:50.099058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.099084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.099235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.099261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.099410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.099436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.099552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.099578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.099674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.099700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.099792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.099819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.099909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.099934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.100043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.100069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.100199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.100223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.100326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.100353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.100476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.100502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.100619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.100644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.100756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.100781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.100875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.100902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.101058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.101084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.101208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.101234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.101335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.101361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.101481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.101509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.101597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.101629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.101755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.101781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.101927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.101952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.102069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.102095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.102218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.102243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.102390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.102415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.736 [2024-07-23 18:22:50.102560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.736 [2024-07-23 18:22:50.102586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.736 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.102718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.102743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.102865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.102891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.102986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.103012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.103136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.103162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.103278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.103303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.103417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.103443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.103569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.103594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.103699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.103724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.103823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.103848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.103973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.103998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.104119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.104145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.104243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.104267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.104377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.104403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.104488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.104513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.104629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.104655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.104782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.104807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.104921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.104947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.105067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.105092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.105187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.105214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.105349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.105388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.105512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.105552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.105692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.105737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.105884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.105912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.106031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.106056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.106186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.106211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.106301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.106332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.106451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.106476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.106616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.106642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.106765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.106791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.106891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.106917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.107039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.107064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.107181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.107206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.107302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.107332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.107449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.107479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.107582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.107608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.107755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.107780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.107870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.107897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.737 [2024-07-23 18:22:50.108005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.737 [2024-07-23 18:22:50.108031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.737 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.108130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.108157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.108249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.108274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.108383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.108410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.108507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.108531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.108638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.108664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.108784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.108809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.108907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.108939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.109095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.109121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.109243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.109269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.109385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.109412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.109534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.109560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.109683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.109709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.109852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.109877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.109998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.110023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.110121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.110147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.110292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.110326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.110476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.110501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.110598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.110625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.110788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.110813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.110961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.110986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.111088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.111113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.111268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.111293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.111425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.111465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.111583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.111617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.111729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.111756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.111877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.111903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.111994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.112020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.112141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.112167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.112326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.112353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.112451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.112476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.112577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.112605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.112728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.112754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.112882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.738 [2024-07-23 18:22:50.112908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.738 qpair failed and we were unable to recover it.
00:34:42.738 [2024-07-23 18:22:50.113004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.113030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.113117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.113142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.113256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.113283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.113432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.113468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.113603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.113637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.113771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.113803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.113915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.113946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.114058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.114088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.114233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.114267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.114427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.114465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.114572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.114598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.114684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.114709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.114806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.114830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.114977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.115002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.115127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.115153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.115248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.115274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.115447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.115491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.115597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.115624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.115762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.739 [2024-07-23 18:22:50.115788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.739 qpair failed and we were unable to recover it.
00:34:42.739 [2024-07-23 18:22:50.115909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.115935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.116023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.116048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.116166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.116192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.116281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.116307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.116441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.116466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 
00:34:42.739 [2024-07-23 18:22:50.116589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.116615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.116737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.116762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.116912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.116938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.117031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.117058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.117182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.117207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 
00:34:42.739 [2024-07-23 18:22:50.117302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.117343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.117476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.117501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.117600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.117625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.117742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.117768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.117887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.117912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 
00:34:42.739 [2024-07-23 18:22:50.118039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.118065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.118189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.118216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.118312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.118345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.118441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.739 [2024-07-23 18:22:50.118467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.739 qpair failed and we were unable to recover it. 00:34:42.739 [2024-07-23 18:22:50.118570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.118596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 
00:34:42.740 [2024-07-23 18:22:50.118722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.118747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.118864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.118889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.118985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.119011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.119134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.119159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.119299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.119345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 
00:34:42.740 [2024-07-23 18:22:50.119482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.119509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.119612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.119637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.119753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.119779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.119892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.119917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.120043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.120068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 
00:34:42.740 [2024-07-23 18:22:50.120189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.120215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.120328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.120354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.120480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.120504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.120623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.120647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.120778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.120803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 
00:34:42.740 [2024-07-23 18:22:50.120890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.120915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.121040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.121065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.121181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.121212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.121310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.121342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.121680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.121708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 
00:34:42.740 [2024-07-23 18:22:50.121834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.121860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.121986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.122012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.122139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.122164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.122266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.122290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.122419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.122445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 
00:34:42.740 [2024-07-23 18:22:50.122566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.122590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.122677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.122702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.122823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.122847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.122987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.123026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.123135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.123162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 
00:34:42.740 [2024-07-23 18:22:50.123260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.123287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.123411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.123437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.123532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.123558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.123655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.123679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.123774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.123799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 
00:34:42.740 [2024-07-23 18:22:50.123946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.123970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.740 [2024-07-23 18:22:50.124092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.740 [2024-07-23 18:22:50.124116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.740 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.124201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.124226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.124342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.124374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.124492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.124516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 
00:34:42.741 [2024-07-23 18:22:50.124666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.124691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.124814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.124840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.124989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.125013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.125129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.125153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.125306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.125340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 
00:34:42.741 [2024-07-23 18:22:50.125453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.125478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.125576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.125601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.125691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.125715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.125807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.125832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.125951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.125975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 
00:34:42.741 [2024-07-23 18:22:50.126066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.126092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.126215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.126239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.126339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.126366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.126458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.126483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 00:34:42.741 [2024-07-23 18:22:50.126572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.741 [2024-07-23 18:22:50.126597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.741 qpair failed and we were unable to recover it. 
00:34:42.741 [2024-07-23 18:22:50.126720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.741 [2024-07-23 18:22:50.126745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.741 qpair failed and we were unable to recover it.
00:34:42.744 [... the same connect() failure (errno = 111) and unrecoverable qpair error repeated continuously through 18:22:50.143 for tqpairs 0x7f6320000b90, 0x7f6328000b90 and 0x7f6330000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:42.744 [2024-07-23 18:22:50.143108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.143134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.143276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.143301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.143445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.143470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.143566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.143591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.143712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.143737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 
00:34:42.744 [2024-07-23 18:22:50.143833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.143858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.143971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.143997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.144086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.144115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.144214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.144240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.144343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.144376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 
00:34:42.744 [2024-07-23 18:22:50.144478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.144507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.144633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.144659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.144755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.144780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.744 [2024-07-23 18:22:50.144899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.744 [2024-07-23 18:22:50.144925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.744 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.145036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.145075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 
00:34:42.745 [2024-07-23 18:22:50.145208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.145234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.145387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.145414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.145534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.145560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.145676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.145702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.145816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.145842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 
00:34:42.745 [2024-07-23 18:22:50.145936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.145961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.146088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.146116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.146261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.146286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.146418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.146444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.146564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.146589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 
00:34:42.745 [2024-07-23 18:22:50.146679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.146704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.146826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.146851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.146948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.146975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.147066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.147091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.147238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.147263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 
00:34:42.745 [2024-07-23 18:22:50.147349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.147376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.147466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.147491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.147591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.147619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.147724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.147752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.147880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.147906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 
00:34:42.745 [2024-07-23 18:22:50.148027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.148053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.148146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.148171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.148292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.148335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.148459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.148484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.148605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.148631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 
00:34:42.745 [2024-07-23 18:22:50.148750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.148775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.148900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.148928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.149082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.149108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.149199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.149225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.149351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.149377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 
00:34:42.745 [2024-07-23 18:22:50.149469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.149494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.149622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.149647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.149767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.149801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.149933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.149959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.150079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.150104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 
00:34:42.745 [2024-07-23 18:22:50.150194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.150221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.745 [2024-07-23 18:22:50.150369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.745 [2024-07-23 18:22:50.150408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.745 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.150537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.150565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.150683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.150709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.150815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.150846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 
00:34:42.746 [2024-07-23 18:22:50.150968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.150994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.151142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.151168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.151256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.151283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.151411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.151437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.151558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.151592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 
00:34:42.746 [2024-07-23 18:22:50.151721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.151747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.151878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.151904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.152030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.152059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.152209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.152236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.152384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.152410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 
00:34:42.746 [2024-07-23 18:22:50.152539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.152567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.152683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.152708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.152808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.152834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.152924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.152949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.153043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.153068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 
00:34:42.746 [2024-07-23 18:22:50.153215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.153240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.153344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.153371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.153469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.153494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.153590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.153615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.153747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.153772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 
00:34:42.746 [2024-07-23 18:22:50.153856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.153881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.153999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.154024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.154111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.154136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.154222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.154247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 00:34:42.746 [2024-07-23 18:22:50.154331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.746 [2024-07-23 18:22:50.154357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.746 qpair failed and we were unable to recover it. 
00:34:42.749 [2024-07-23 18:22:50.170236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.749 [2024-07-23 18:22:50.170260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.749 qpair failed and we were unable to recover it. 00:34:42.749 [2024-07-23 18:22:50.170344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.749 [2024-07-23 18:22:50.170371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.749 qpair failed and we were unable to recover it. 00:34:42.749 [2024-07-23 18:22:50.170494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.749 [2024-07-23 18:22:50.170518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.749 qpair failed and we were unable to recover it. 00:34:42.749 [2024-07-23 18:22:50.170655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.749 [2024-07-23 18:22:50.170680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.749 qpair failed and we were unable to recover it. 00:34:42.749 [2024-07-23 18:22:50.170786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.749 [2024-07-23 18:22:50.170811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.749 qpair failed and we were unable to recover it. 
00:34:42.749 [2024-07-23 18:22:50.170938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.749 [2024-07-23 18:22:50.170966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.749 qpair failed and we were unable to recover it. 00:34:42.749 [2024-07-23 18:22:50.171067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.749 [2024-07-23 18:22:50.171092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.749 qpair failed and we were unable to recover it. 00:34:42.749 [2024-07-23 18:22:50.171187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.749 [2024-07-23 18:22:50.171213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.749 qpair failed and we were unable to recover it. 00:34:42.749 [2024-07-23 18:22:50.171302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.171335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.171427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.171452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 
00:34:42.750 [2024-07-23 18:22:50.171568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.171593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.171710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.171735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.171888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.171913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.172033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.172058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.172177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.172202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 
00:34:42.750 [2024-07-23 18:22:50.172349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.172375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.172461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.172486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.172619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.172648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.172734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.172760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.172880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.172905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 
00:34:42.750 [2024-07-23 18:22:50.173026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.173051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.173167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.173192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.173290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.173322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.173419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.173445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.173595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.173620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 
00:34:42.750 [2024-07-23 18:22:50.173708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.173733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.173862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.173888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.174005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.174030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.174154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.174180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.174271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.174297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 
00:34:42.750 [2024-07-23 18:22:50.174399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.174424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.174552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.174578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.174697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.174722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.174818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.174843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.174964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.174990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 
00:34:42.750 [2024-07-23 18:22:50.175080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.175106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.175200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.175226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.175342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.175368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.175462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.175487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.175582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.175607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 
00:34:42.750 [2024-07-23 18:22:50.175727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.175753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.175863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.175888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.176013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.176038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.176193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.176218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.176383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.176423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 
00:34:42.750 [2024-07-23 18:22:50.176529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.750 [2024-07-23 18:22:50.176555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.750 qpair failed and we were unable to recover it. 00:34:42.750 [2024-07-23 18:22:50.176647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.176673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.176768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.176794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.176909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.176934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.177082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.177107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 
00:34:42.751 [2024-07-23 18:22:50.177231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.177255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.177374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.177400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.177521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.177545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.177633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.177657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.177771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.177796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 
00:34:42.751 [2024-07-23 18:22:50.177915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.177939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.178061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.178091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.178189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.178219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.178344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.178370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.178491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.178517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 
00:34:42.751 [2024-07-23 18:22:50.178608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.178634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.178757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.178782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.178877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.178904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.179052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.179077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.179167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.179192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 
00:34:42.751 [2024-07-23 18:22:50.179283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.179309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.179433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.179458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.179546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.179571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.179716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.179742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.179861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.179886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 
00:34:42.751 [2024-07-23 18:22:50.180034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.180059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.180188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.180216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.180343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.180369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.180493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.180519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 00:34:42.751 [2024-07-23 18:22:50.180639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.180665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 
00:34:42.751 [2024-07-23 18:22:50.180756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.751 [2024-07-23 18:22:50.180782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.751 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f6328000b90 (10.0.0.2:4420) from 18:22:50.180904 through 18:22:50.196729 ...]
00:34:42.754 [2024-07-23 18:22:50.196851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.754 [2024-07-23 18:22:50.196875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.754 qpair failed and we were unable to recover it. 00:34:42.754 [2024-07-23 18:22:50.196966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.754 [2024-07-23 18:22:50.196991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.754 qpair failed and we were unable to recover it. 00:34:42.754 [2024-07-23 18:22:50.197104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.197128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.197247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.197273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.197373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.197399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 
00:34:42.755 [2024-07-23 18:22:50.197499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.197523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.197645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.197670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.197790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.197814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.197937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.197963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.198082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.198107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 
00:34:42.755 [2024-07-23 18:22:50.198189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.198214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.198345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.198371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.198497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.198523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.198615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.198639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.198759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.198791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 
00:34:42.755 [2024-07-23 18:22:50.198890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.198915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.199006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.199031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.199157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.199181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.199260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.199286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.199394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.199419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 
00:34:42.755 [2024-07-23 18:22:50.199512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.199538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.199681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.199706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.199825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.199851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.199972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.199996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.200088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.200113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 
00:34:42.755 [2024-07-23 18:22:50.200203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.200228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.200373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.200399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.200494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.200518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.200671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.200696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.200817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.200843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 
00:34:42.755 [2024-07-23 18:22:50.200963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.200988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.201085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.201110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.201224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.201249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.201368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.201394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.201543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.201568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 
00:34:42.755 [2024-07-23 18:22:50.201663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.201689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.201837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.201863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.202008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.202033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.202155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.202180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 00:34:42.755 [2024-07-23 18:22:50.202299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.755 [2024-07-23 18:22:50.202334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.755 qpair failed and we were unable to recover it. 
00:34:42.755 [2024-07-23 18:22:50.202434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.202460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.202560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.202584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.202684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.202708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.202834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.202859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.202982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.203007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 
00:34:42.756 [2024-07-23 18:22:50.203100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.203125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.203220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.203245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.203377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.203403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.203521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.203546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.203637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.203662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 
00:34:42.756 [2024-07-23 18:22:50.203775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.203799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.203946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.203971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.204056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.204081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.204199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.204224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.204346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.204377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 
00:34:42.756 [2024-07-23 18:22:50.204473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.204498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.204620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.204645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.204769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.204794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.204890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.204915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.205010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.205036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 
00:34:42.756 [2024-07-23 18:22:50.205155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.205180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.205299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.205331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.205452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.205478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.205629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.205654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.205749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.205773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 
00:34:42.756 [2024-07-23 18:22:50.205865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.205889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.206017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.206042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.206190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.206216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.206312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.206346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.206490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.206514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 
00:34:42.756 [2024-07-23 18:22:50.206662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.206687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.206833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.206859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.206979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.756 [2024-07-23 18:22:50.207004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.756 qpair failed and we were unable to recover it. 00:34:42.756 [2024-07-23 18:22:50.207127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.757 [2024-07-23 18:22:50.207152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.757 qpair failed and we were unable to recover it. 00:34:42.757 [2024-07-23 18:22:50.207248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.757 [2024-07-23 18:22:50.207273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.757 qpair failed and we were unable to recover it. 
00:34:42.757 [2024-07-23 18:22:50.207404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.207429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.207523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.207549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.207694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.207719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.207838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.207863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.207952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.207977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.208073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.208098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.208195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.208220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.208343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.208370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.208458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.208483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.208598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.208625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.208774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.208798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.208885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.208910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.209053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.209077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.209223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.209247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.209376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.209402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.209524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.209549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.209681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.209706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.209793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.209819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.209964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.209989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.210112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.210141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.210263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.210288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.210392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.210417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.210544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.210569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.210686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.210711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.210832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.210857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.210973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.210998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.211095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.211120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.211213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.211239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.211364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.211392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.211514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.211539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.211664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.211690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.211790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.211814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.211937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.211961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.212094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.212119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.212235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.212259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.212352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.757 [2024-07-23 18:22:50.212379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.757 qpair failed and we were unable to recover it.
00:34:42.757 [2024-07-23 18:22:50.212491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.212516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.212639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.212664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.212759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.212785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.212906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.212932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.213081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.213106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.213209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.213233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.213354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.213380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.213473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.213497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.213619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.213644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.213739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.213764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.213890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.213915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.214036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.214061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.214209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.214234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.214327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.214352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.214447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.214471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.214591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.214615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.214767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.214792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.214937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.214963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.215057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.215082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.215175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.215201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.215327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.215353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.215453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.215477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.215588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.215613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.215734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.215762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.215881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.215905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.216024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.216049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.216195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.216220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.216342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.216368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.216491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.216517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.216636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.216661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.216745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.216770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.216888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.216912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.217026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.217050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.217205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.217229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.217343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.217368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.217460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.217486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.217606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.217631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.758 [2024-07-23 18:22:50.217735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.758 [2024-07-23 18:22:50.217761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.758 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.217879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.217904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.218022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.218049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.218136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.218161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.218285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.218310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.218464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.218489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.218578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.218604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.218717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.218741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.218865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.218890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.219009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.219035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.219123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.219146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.219269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.219294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.219405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.219430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.219583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.219609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.219727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.219752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.219847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.219872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.219968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.219994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.220092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.220116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.220207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.220232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.220330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.220357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.220448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.220472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.220595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.220619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.220776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.220801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.220926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.220952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.221103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.221128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.221220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.221245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.221375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.221405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.221498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.221523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.221647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.221673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.221767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.221792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.221915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.221940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.222055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.222080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.222164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.222189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.222306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.222344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.222443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.222468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.222590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.222615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.222737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.222762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.222883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.222908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.759 [2024-07-23 18:22:50.222991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.759 [2024-07-23 18:22:50.223016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.759 qpair failed and we were unable to recover it.
00:34:42.760 [2024-07-23 18:22:50.223162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.760 [2024-07-23 18:22:50.223200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.760 qpair failed and we were unable to recover it.
00:34:42.760 [2024-07-23 18:22:50.223329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.760 [2024-07-23 18:22:50.223357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.760 qpair failed and we were unable to recover it.
00:34:42.760 [2024-07-23 18:22:50.223481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.760 [2024-07-23 18:22:50.223507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:42.760 qpair failed and we were unable to recover it.
00:34:42.760 [2024-07-23 18:22:50.223595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.223620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.223743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.223768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.223850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.223876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.224023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.224049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.224205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.224231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-23 18:22:50.224355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.224381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.224501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.224528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.224674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.224698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.224793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.224817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.224916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.224940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-23 18:22:50.225063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.225088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.225173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.225198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.225328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.225355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.225504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.225528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.225645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.225670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-23 18:22:50.225761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.225785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.225904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.225929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.226049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.226074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.226169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.226198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.226328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.226354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-23 18:22:50.226490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.226515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.226680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.226705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.226798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.226824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.226980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.227004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.227141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.227171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-23 18:22:50.227289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.227321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.227438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.227463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.227582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.227608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.227733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.227758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.227859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.227885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-23 18:22:50.228006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.228033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.228157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.228182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.228279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.228303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.228460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.228487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-23 18:22:50.228636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-23 18:22:50.228661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-23 18:22:50.228780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.228806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.228929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.228955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.229100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.229125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.229231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.229257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.229385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.229412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-23 18:22:50.229531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.229557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.229710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.229735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.229825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.229850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.230001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.230025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.230117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.230143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-23 18:22:50.230263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.230288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.230419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.230446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.230568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.230593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.230717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.230741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.230857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.230882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-23 18:22:50.231008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.231033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.231161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.231200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.231301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.231334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.231452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.231478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.231569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.231596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-23 18:22:50.231738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.231763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.231880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.231906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.232027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.232053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.232169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.232194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.232400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.232427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-23 18:22:50.232549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.232574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.232666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.232691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.232789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.232814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.232931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.232956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.233072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.233103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-23 18:22:50.233206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.233232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.233357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.233385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.233588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-23 18:22:50.233615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-23 18:22:50.233757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.233782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.233982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.234008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 
00:34:42.762 [2024-07-23 18:22:50.234152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.234178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.234326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.234352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.234451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.234477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.234633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.234658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.234749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.234775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 
00:34:42.762 [2024-07-23 18:22:50.234915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.234940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.235059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.235084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.235171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.235196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.235344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.235370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 00:34:42.762 [2024-07-23 18:22:50.235464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-23 18:22:50.235490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 
00:34:42.765 [2024-07-23 18:22:50.252387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.252413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.252543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.252569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.252693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.252719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.252810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.252836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.252979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.253004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 
00:34:42.765 [2024-07-23 18:22:50.253123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.253148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.253251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.253277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.253427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.253453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.253562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.253600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.253750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.253776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 
00:34:42.765 [2024-07-23 18:22:50.253897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.253922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.254048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.254074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.254164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.254189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.254284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.254310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.254470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.254496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 
00:34:42.765 [2024-07-23 18:22:50.254614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.254639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.254759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.254783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.254935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.254963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.255056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.255083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.255203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.255228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 
00:34:42.765 [2024-07-23 18:22:50.255325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.255352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.255449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.255479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.255571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.255598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.255694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.765 [2024-07-23 18:22:50.255720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.765 qpair failed and we were unable to recover it. 00:34:42.765 [2024-07-23 18:22:50.255844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.255869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 
00:34:42.766 [2024-07-23 18:22:50.255965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.255990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.256084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.256112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.256237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.256261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.256370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.256396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.256490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.256515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 
00:34:42.766 [2024-07-23 18:22:50.256636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.256663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.256809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.256834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.256982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.257010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.257124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.257148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.257280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.257329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 
00:34:42.766 [2024-07-23 18:22:50.257494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.257522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.257642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.257668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.257758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.257786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.257958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.257990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.258138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.258164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 
00:34:42.766 [2024-07-23 18:22:50.258257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.258283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.258379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.258405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.258524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.258550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.258698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.258724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.258843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.258869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 
00:34:42.766 [2024-07-23 18:22:50.258989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.259014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.259129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.259155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.259283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.259312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.259425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.259450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.259539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.259565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 
00:34:42.766 [2024-07-23 18:22:50.259659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.259684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.259799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.259825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.259941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.259967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.260062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.260086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.260213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.260237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 
00:34:42.766 [2024-07-23 18:22:50.260362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.260391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.260489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.260514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.260599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.260624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.260743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.260768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.260906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.260931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 
00:34:42.766 [2024-07-23 18:22:50.261050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.261076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.766 [2024-07-23 18:22:50.261220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.766 [2024-07-23 18:22:50.261250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.766 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.261337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.261363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.261453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.261478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.261569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.261594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 
00:34:42.767 [2024-07-23 18:22:50.261741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.261767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.261889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.261915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.262003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.262030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.262127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.262152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.262248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.262274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 
00:34:42.767 [2024-07-23 18:22:50.262409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.262434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.262579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.262604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.262750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.262775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.262867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.262892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-23 18:22:50.262988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-23 18:22:50.263013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 
00:34:42.767 [2024-07-23 18:22:50.263113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.767 [2024-07-23 18:22:50.263137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:42.767 qpair failed and we were unable to recover it.
00:34:42.770 [last three-line sequence repeated through 2024-07-23 18:22:50.279506: connect() failed with errno = 111 (ECONNREFUSED) on every retry, for tqpair=0x7f6320000b90, 0x7f6328000b90, and 0x7f6330000b90, all targeting addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it."]
00:34:42.770 [2024-07-23 18:22:50.279601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.279627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.279770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.279796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.279909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.279936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.280022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.280048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.280168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.280194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 
00:34:42.770 [2024-07-23 18:22:50.280323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.280350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.280475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.280502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.280621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.280647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.280771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.280798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.280920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.280946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 
00:34:42.770 [2024-07-23 18:22:50.281045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.281073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.281201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.281227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.281315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.281350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.281462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.281488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.281614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.281641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 
00:34:42.770 [2024-07-23 18:22:50.281733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.281759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.281888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.281919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.282065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.282094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.282327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.282367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.282521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.282548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 
00:34:42.770 [2024-07-23 18:22:50.282674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.282700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.282846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-23 18:22:50.282871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-23 18:22:50.282988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.283013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.283116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.283143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.283246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.283272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 
00:34:42.771 [2024-07-23 18:22:50.283369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.283397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.283524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.283549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.283641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.283666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.283758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.283783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.283931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.283956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 
00:34:42.771 [2024-07-23 18:22:50.284050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.284075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.284162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.284193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.284309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.284340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.284434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.284459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.284546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.284569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 
00:34:42.771 [2024-07-23 18:22:50.284693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.284719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.284815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.284839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.284935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.284961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.285082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.285107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.285252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.285277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 
00:34:42.771 [2024-07-23 18:22:50.285412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.285438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.285558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.285583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.285699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.285725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.285814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.285839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.285985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.286011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 
00:34:42.771 [2024-07-23 18:22:50.286159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.286202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.286348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.286377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.286523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.286557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.286686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.286714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.286820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.286848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 
00:34:42.771 [2024-07-23 18:22:50.286971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.286997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.287152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.287178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.287300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.287331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.287465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.287491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.287611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.287637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 
00:34:42.771 [2024-07-23 18:22:50.287758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.771 [2024-07-23 18:22:50.287784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.771 qpair failed and we were unable to recover it. 00:34:42.771 [2024-07-23 18:22:50.287886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.287911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.288035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.288061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.288185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.288216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.288305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.288336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 
00:34:42.772 [2024-07-23 18:22:50.288461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.288487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.288617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.288643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.288742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.288768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.288917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.288943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.289075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.289101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 
00:34:42.772 [2024-07-23 18:22:50.289191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.289216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.289310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.289341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.289443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.289468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.289596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.289622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.289750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.289775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 
00:34:42.772 [2024-07-23 18:22:50.289895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.289921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.290020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.290044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.290171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.290196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.290326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.290352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 00:34:42.772 [2024-07-23 18:22:50.290465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.290497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it. 
00:34:42.772 [2024-07-23 18:22:50.290616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.772 [2024-07-23 18:22:50.290652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.772 qpair failed and we were unable to recover it.
[... identical retries elided: the same connect() failure (errno = 111, ECONNREFUSED) from posix.c:1023:posix_sock_create and the same nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock connection error to addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", repeat continuously from 18:22:50.290 to 18:22:50.307 across tqpairs 0x7f6320000b90, 0x7f6328000b90, and 0x7f6330000b90 ...]
00:34:42.776 [2024-07-23 18:22:50.307255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.307281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.307420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.307458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.307596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.307623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.307716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.307741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.307866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.307892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 
00:34:42.776 [2024-07-23 18:22:50.307995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.308020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.308141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.308167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.308287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.308314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.308455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.308482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.308605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.308630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 
00:34:42.776 [2024-07-23 18:22:50.308749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.308773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.308889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.308915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.309039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.309063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.309155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.309180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.309276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.309301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 
00:34:42.776 [2024-07-23 18:22:50.309424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.309467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.309576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.309605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.309755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.309783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.309901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.309927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.310027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.310054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 
00:34:42.776 [2024-07-23 18:22:50.310172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.310198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.310350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.310377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.310512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.310549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.310677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.310704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.310799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.310826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 
00:34:42.776 [2024-07-23 18:22:50.310948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.310975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.311097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.311122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.311275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.311303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.311431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.311459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.311590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.311617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 
00:34:42.776 [2024-07-23 18:22:50.311763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.311789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.311889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.311916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.776 qpair failed and we were unable to recover it. 00:34:42.776 [2024-07-23 18:22:50.312041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.776 [2024-07-23 18:22:50.312068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.312187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.312217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.312322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.312350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 
00:34:42.777 [2024-07-23 18:22:50.312469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.312495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.312615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.312641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.312730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.312756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.312843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.312869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.312961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.312987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 
00:34:42.777 [2024-07-23 18:22:50.313101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.313126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.313209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.313235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.313363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.313391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.313539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.313564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.313677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.313702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 
00:34:42.777 [2024-07-23 18:22:50.313815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.313840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.313964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.313990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.314135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.314161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.314273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.314298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.314419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.314445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 
00:34:42.777 [2024-07-23 18:22:50.314590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.314616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.314817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.314843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.315004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.315029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.315176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.315201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.315291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.315322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 
00:34:42.777 [2024-07-23 18:22:50.315439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.315469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.315587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.315612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.315705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.315731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.315850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.315876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.315996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.316021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 
00:34:42.777 [2024-07-23 18:22:50.316145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.316170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.316343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.316382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.316586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.316613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.316709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.316736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.316827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.316852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 
00:34:42.777 [2024-07-23 18:22:50.316939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.316964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.317088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.317113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.317199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.317224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.317354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.317379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 00:34:42.777 [2024-07-23 18:22:50.317518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.777 [2024-07-23 18:22:50.317557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.777 qpair failed and we were unable to recover it. 
00:34:42.777 [2024-07-23 18:22:50.317658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.317685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.317808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.317833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.317949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.317974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.318064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.318092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.318208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.318234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 
00:34:42.778 [2024-07-23 18:22:50.318354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.318383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.318481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.318505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.318625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.318649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.318769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.318794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.318894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.318922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 
00:34:42.778 [2024-07-23 18:22:50.319042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.319069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.319195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.319223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.319324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.319351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.319439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.319465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.319585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.319610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 
00:34:42.778 [2024-07-23 18:22:50.319729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.319754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.319874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.319898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.320017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.320041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.320157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.320185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.320343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.320370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 
00:34:42.778 [2024-07-23 18:22:50.320489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.320515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.320603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.320629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.320746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.320772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.320864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.320889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.320985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.321010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 
00:34:42.778 [2024-07-23 18:22:50.321106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.321136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.321231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.321257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.321347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.321374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.321493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.321519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.321748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.321773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 
00:34:42.778 [2024-07-23 18:22:50.322003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.322029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.322176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.322201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.322327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.322353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.322449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.322474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 00:34:42.778 [2024-07-23 18:22:50.322566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.322592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.778 qpair failed and we were unable to recover it. 
00:34:42.778 [2024-07-23 18:22:50.322713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.778 [2024-07-23 18:22:50.322738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.322858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.322884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.322976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.323002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.323124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.323149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.323247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.323273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 
00:34:42.779 [2024-07-23 18:22:50.323379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.323421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.323543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.323569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.323685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.323711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.323832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.323857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.323975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.324002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 
00:34:42.779 [2024-07-23 18:22:50.324097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.324122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.324211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.324236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.324357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.324384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.324514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.324540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.324663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.324688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 
00:34:42.779 [2024-07-23 18:22:50.324781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.324806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.324926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.324951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.325050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.325076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.325234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.325272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.325387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.325425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 
00:34:42.779 [2024-07-23 18:22:50.325554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.325580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.325726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.325750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.325869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.325894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.326014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.326039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.326133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.326158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 
00:34:42.779 [2024-07-23 18:22:50.326278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.326302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.326406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.326433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.326532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.326557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.326676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.326701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.326795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.326820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 
00:34:42.779 [2024-07-23 18:22:50.326940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.326972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.327064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.327090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.327173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.327198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.327299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.327340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.327465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.327490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 
00:34:42.779 [2024-07-23 18:22:50.327605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.327630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.779 qpair failed and we were unable to recover it. 00:34:42.779 [2024-07-23 18:22:50.327749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.779 [2024-07-23 18:22:50.327774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.327895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.327922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.328021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.328047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.328142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.328169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 
00:34:42.780 [2024-07-23 18:22:50.328276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.328314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.328460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.328488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.328611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.328637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.328727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.328753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.328912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.328940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 
00:34:42.780 [2024-07-23 18:22:50.329067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.329094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.329184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.329209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.329303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.329338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.329490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.329517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.329615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.329641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 
00:34:42.780 [2024-07-23 18:22:50.329738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.329764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.329918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.329944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.330065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.330090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.330182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.330207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.330333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.330362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 
00:34:42.780 [2024-07-23 18:22:50.330489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.330518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.330640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.330665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.330756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.330783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.330933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.330959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.331077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.331103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 
00:34:42.780 [2024-07-23 18:22:50.331249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.331275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.331378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.331406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.331499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.331526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.331614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.331640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-23 18:22:50.331759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.331786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 
00:34:42.780 [2024-07-23 18:22:50.331915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-23 18:22:50.331943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it.
[... the same connect()-failed (errno = 111) / "qpair failed and we were unable to recover it" pair repeats continuously from 18:22:50.331915 through 18:22:50.348060, cycling across tqpair=0x7f6320000b90, 0x7f6328000b90, and 0x7f6330000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:42.784 [2024-07-23 18:22:50.348148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.348173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.348283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.348309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.348461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.348486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.348605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.348630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.348754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.348780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-23 18:22:50.348897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.348922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.349017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.349042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.349126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.349151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.349244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.349269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.349368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.349393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-23 18:22:50.349539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.349564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.349690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.349715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.349845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.349872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.349989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.350015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.350110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.350135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-23 18:22:50.350252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.350277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.350421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.350460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.350584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.350611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.350713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.350738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.350824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.350849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-23 18:22:50.350972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.350996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.351078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.351104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.351198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.351222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.351314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.351356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.351452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.351477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-23 18:22:50.351591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.351617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.351707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.351732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.351848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.351873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.352022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.352047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-23 18:22:50.352160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-23 18:22:50.352186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-23 18:22:50.352279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.352303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.352431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.352459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.352556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.352582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.352736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.352761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.352877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.352902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-23 18:22:50.353018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.353042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.353185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.353210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.353360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.353390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.353484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.353509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.353626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.353651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-23 18:22:50.353769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.353793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.353913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.353937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.354036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.354061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.354186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.354213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.354340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.354366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-23 18:22:50.354457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.354483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.354572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.354596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.354698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.354722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.354816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.354842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.354931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.354954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-23 18:22:50.355077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.355101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.355227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.355252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.355354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.355382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.355473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.355497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.355588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.355614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-23 18:22:50.355757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.355782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.355879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.355904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.356021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.356045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.356162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.356186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-23 18:22:50.356306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-23 18:22:50.356336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
00:34:42.786 [2024-07-23 18:22:50.356456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.356481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.356607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.356633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.356746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.356770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.357005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.357030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.357151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.357179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
00:34:42.786 [2024-07-23 18:22:50.357299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.357333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.357432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.357457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.357552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.357577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.357669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.357694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.357805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.357830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
00:34:42.786 [2024-07-23 18:22:50.357913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.357938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.358025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.358049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.358153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.358198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.358301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.358337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-23 18:22:50.358443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.358470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
00:34:42.786 [2024-07-23 18:22:50.358601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-23 18:22:50.358626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
[... the same error triplet — posix.c:1023:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error, and "qpair failed and we were unable to recover it." — repeats continuously from 18:22:50.358749 through 18:22:50.374247, alternating across tqpairs 0x7f6320000b90, 0x7f6328000b90, and 0x7f6330000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:43.067 [2024-07-23 18:22:50.374331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.067 [2024-07-23 18:22:50.374359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.067 qpair failed and we were unable to recover it. 00:34:43.067 [2024-07-23 18:22:50.374458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.067 [2024-07-23 18:22:50.374483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.067 qpair failed and we were unable to recover it. 00:34:43.067 [2024-07-23 18:22:50.374577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.067 [2024-07-23 18:22:50.374602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.067 qpair failed and we were unable to recover it. 00:34:43.067 [2024-07-23 18:22:50.374728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.067 [2024-07-23 18:22:50.374753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.067 qpair failed and we were unable to recover it. 00:34:43.067 [2024-07-23 18:22:50.374852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.067 [2024-07-23 18:22:50.374878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.067 qpair failed and we were unable to recover it. 
00:34:43.067 [2024-07-23 18:22:50.374968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.067 [2024-07-23 18:22:50.374994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.067 qpair failed and we were unable to recover it. 00:34:43.067 [2024-07-23 18:22:50.375088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.067 [2024-07-23 18:22:50.375113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.067 qpair failed and we were unable to recover it. 00:34:43.067 [2024-07-23 18:22:50.375201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.067 [2024-07-23 18:22:50.375231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.067 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.375358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.375384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.375475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.375501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 
00:34:43.068 [2024-07-23 18:22:50.375603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.375629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.375724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.375750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.375879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.375905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.376057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.376082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.376172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.376201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 
00:34:43.068 [2024-07-23 18:22:50.376333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.376361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.376481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.376507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.376602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.376629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.376723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.376748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.376873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.376900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 
00:34:43.068 [2024-07-23 18:22:50.377023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.377051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.377187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.377213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.377363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.377389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.377487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.377514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.377608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.377634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 
00:34:43.068 [2024-07-23 18:22:50.377758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.377783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.377918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.377943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.378039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.378064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.378188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.378213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.378307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.378343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 
00:34:43.068 [2024-07-23 18:22:50.378470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.378495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.378619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.378645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.378737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.378762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.378850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.378875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.378966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.378991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 
00:34:43.068 [2024-07-23 18:22:50.379134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.379159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.379278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.379303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.379405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.379431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.379547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.379572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.379667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.379692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 
00:34:43.068 [2024-07-23 18:22:50.379813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.379840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.379967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.379993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.380148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.068 [2024-07-23 18:22:50.380183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.068 qpair failed and we were unable to recover it. 00:34:43.068 [2024-07-23 18:22:50.380392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.380420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.380544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.380570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 
00:34:43.069 [2024-07-23 18:22:50.380691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.380717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.380815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.380841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.380959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.380989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.381088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.381115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.381262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.381287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 
00:34:43.069 [2024-07-23 18:22:50.381406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.381433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.381585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.381611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.381706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.381731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.381851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.381877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.382007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.382033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 
00:34:43.069 [2024-07-23 18:22:50.382122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.382148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.382267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.382292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.382390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.382416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.382515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.382541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.382690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.382715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 
00:34:43.069 [2024-07-23 18:22:50.382837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.382862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.382963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.382988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.383100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.383125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.383213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.383238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.383362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.383388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 
00:34:43.069 [2024-07-23 18:22:50.383588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.383613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.383730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.383756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.383840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.383865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.383958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.383983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.384101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.384126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 
00:34:43.069 [2024-07-23 18:22:50.384246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.384271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.384392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.384419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.384516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.384541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.384688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.384714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.384854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.384893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 
00:34:43.069 [2024-07-23 18:22:50.385024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.385051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.385175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.385203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.069 [2024-07-23 18:22:50.385327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.069 [2024-07-23 18:22:50.385357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.069 qpair failed and we were unable to recover it. 00:34:43.070 [2024-07-23 18:22:50.385518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.070 [2024-07-23 18:22:50.385544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.070 qpair failed and we were unable to recover it. 00:34:43.070 [2024-07-23 18:22:50.385668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.070 [2024-07-23 18:22:50.385694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.070 qpair failed and we were unable to recover it. 
00:34:43.070 [2024-07-23 18:22:50.385781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.070 [2024-07-23 18:22:50.385807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.070 qpair failed and we were unable to recover it. 00:34:43.070 [2024-07-23 18:22:50.385919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.070 [2024-07-23 18:22:50.385958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.070 qpair failed and we were unable to recover it. 00:34:43.070 [2024-07-23 18:22:50.386051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.070 [2024-07-23 18:22:50.386078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.070 qpair failed and we were unable to recover it. 00:34:43.070 [2024-07-23 18:22:50.386229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.070 [2024-07-23 18:22:50.386254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.070 qpair failed and we were unable to recover it. 00:34:43.070 [2024-07-23 18:22:50.386360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.070 [2024-07-23 18:22:50.386389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.070 qpair failed and we were unable to recover it. 
00:34:43.070 [2024-07-23 18:22:50.386492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.386518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.386665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.386691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.386786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.386817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.386935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.386960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.387053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.387079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.387204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.387229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.387329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.387355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.387449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.387475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.387566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.387592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.387685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.387711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.387855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.387881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.387975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.388001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.388101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.388127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.388251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.388280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.388386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.388412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.388507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.388532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.388684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.388710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.388803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.388828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.388946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.388971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.389117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.389142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.389275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.389313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.070 [2024-07-23 18:22:50.389430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.070 [2024-07-23 18:22:50.389458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.070 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.389612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.389638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.389765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.389792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.389912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.389945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.390039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.390067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.390190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.390216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.390315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.390349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.390448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.390473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.390598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.390625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.390745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.390770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.390890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.390916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.391012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.391038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.391151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.391176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.391287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.391312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.391415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.391440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.391530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.391555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.391640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.391665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.391781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.391807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.391928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.391953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.392075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.392100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.392204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.392243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.392340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.392376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.392478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.392504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.392626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.392652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.392782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.392807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.392949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.392974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.393061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.393087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.393202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.393228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.393350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.393377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.393495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.393520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.393644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.393669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.393782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.393808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.393952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.393977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.394100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.394126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.394233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.394271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.394387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.394415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.394539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.394564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.394702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.071 [2024-07-23 18:22:50.394729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.071 qpair failed and we were unable to recover it.
00:34:43.071 [2024-07-23 18:22:50.394825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.394851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.394971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.394996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.395083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.395108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.395195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.395221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.395344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.395370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.395469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.395495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.395588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.395613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.395708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.395733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.395850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.395875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.396023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.396048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.396255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.396280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.396380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.396409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.396534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.396560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.396706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.396732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.396820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.396846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.396968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.396993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.397120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.397146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.397245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.397272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.397378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.397405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.397527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.397553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.397650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.397677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.397800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.397826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.397920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.397946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.398098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.398128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.398214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.398240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.398339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.398365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.398485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.398511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.398607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.398632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.398750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.398775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.398974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.398999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.399122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.399147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.399250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.399275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.399367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.399392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.399541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.399567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.399690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.399716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.399859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.072 [2024-07-23 18:22:50.399884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.072 qpair failed and we were unable to recover it.
00:34:43.072 [2024-07-23 18:22:50.400000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.072 [2024-07-23 18:22:50.400025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.072 qpair failed and we were unable to recover it. 00:34:43.072 [2024-07-23 18:22:50.400127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.400152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.400300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.400334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.400444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.400470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.400595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.400620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 
00:34:43.073 [2024-07-23 18:22:50.400716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.400741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.400867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.400893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.401014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.401039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.401155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.401181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.401270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.401295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 
00:34:43.073 [2024-07-23 18:22:50.401413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.401460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.401588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.401626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.401754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.401782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.401877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.401904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.402042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.402079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 
00:34:43.073 [2024-07-23 18:22:50.402235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.402269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.402410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.402437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.402559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.402584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.402724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.402749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.402875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.402900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 
00:34:43.073 [2024-07-23 18:22:50.403051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.403078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.403183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.403209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.403333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.403360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.403482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.403507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.403630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.403655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 
00:34:43.073 [2024-07-23 18:22:50.403775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.403800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.403920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.403945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.404070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.404095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.404248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.404274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.404380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.404408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 
00:34:43.073 [2024-07-23 18:22:50.404531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.404557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.404677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.404702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.404829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.404855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.404983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.405008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.405156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.405182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 
00:34:43.073 [2024-07-23 18:22:50.405301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.073 [2024-07-23 18:22:50.405331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.073 qpair failed and we were unable to recover it. 00:34:43.073 [2024-07-23 18:22:50.405456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.405481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.405575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.405600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.405721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.405746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.405864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.405890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 
00:34:43.074 [2024-07-23 18:22:50.405985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.406010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.406140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.406166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.406284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.406309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.406408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.406433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.406558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.406584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 
00:34:43.074 [2024-07-23 18:22:50.406676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.406702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.406785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.406810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.406931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.406956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.407074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.407099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.407241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.407267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 
00:34:43.074 [2024-07-23 18:22:50.407402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.407441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.407593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.407620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.407743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.407770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.407898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.407924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.408044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.408076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 
00:34:43.074 [2024-07-23 18:22:50.408192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.408217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.408308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.408342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.408459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.408484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.408575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.408600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.408748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.408773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 
00:34:43.074 [2024-07-23 18:22:50.408867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.408893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.409011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.409036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.409150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.409175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.409277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.409324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.409452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.409480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 
00:34:43.074 [2024-07-23 18:22:50.409587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.409619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.409710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.074 [2024-07-23 18:22:50.409737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.074 qpair failed and we were unable to recover it. 00:34:43.074 [2024-07-23 18:22:50.409889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.409915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.410042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.410067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.410218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.410245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 
00:34:43.075 [2024-07-23 18:22:50.410345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.410372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.410460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.410485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.410608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.410633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.410756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.410781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.410904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.410929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 
00:34:43.075 [2024-07-23 18:22:50.411048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.411073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.411165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.411191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.411280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.411305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.411408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.411434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.411591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.411616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 
00:34:43.075 [2024-07-23 18:22:50.411762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.411787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.411918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.411943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.412042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.412068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.412187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.412213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 00:34:43.075 [2024-07-23 18:22:50.412325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.075 [2024-07-23 18:22:50.412364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.075 qpair failed and we were unable to recover it. 
00:34:43.075 [2024-07-23 18:22:50.412467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.075 [2024-07-23 18:22:50.412493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.075 qpair failed and we were unable to recover it.
[... the three-line error above repeats with varying timestamps between 18:22:50.412 and 18:22:50.428, for tqpairs 0x7f6320000b90, 0x7f6328000b90, and 0x7f6330000b90, all attempting to reach 10.0.0.2 port 4420; every attempt fails the same way and no qpair recovers ...]
00:34:43.078 [2024-07-23 18:22:50.428953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.428980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.429106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.429132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.429247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.429273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.429446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.429489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.429627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.429666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 
00:34:43.078 [2024-07-23 18:22:50.429791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.429819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.429916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.429942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.430089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.430114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.430234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.430261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.430387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.430415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 
00:34:43.078 [2024-07-23 18:22:50.430513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.430543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.430635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.078 [2024-07-23 18:22:50.430661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.078 qpair failed and we were unable to recover it. 00:34:43.078 [2024-07-23 18:22:50.430760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.430787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.430893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.430919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.431016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.431048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 
00:34:43.079 [2024-07-23 18:22:50.431172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.431199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.431301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.431338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.431468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.431494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.431614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.431642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.431764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.431789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 
00:34:43.079 [2024-07-23 18:22:50.431991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.432017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.432137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.432163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.432281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.432306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.432512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.432538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.432682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.432708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 
00:34:43.079 [2024-07-23 18:22:50.432826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.432852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.432973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.432999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.433125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.433153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.433281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.433308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.433444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.433470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 
00:34:43.079 [2024-07-23 18:22:50.433617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.433643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.433735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.433761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.433885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.433910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.434034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.434059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.434149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.434175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 
00:34:43.079 [2024-07-23 18:22:50.434267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.434292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.434420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.434447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.434571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.434597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.434746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.434772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.434918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.434943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 
00:34:43.079 [2024-07-23 18:22:50.435062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.435089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.435211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.435237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.435353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.435380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.435504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.435529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.079 qpair failed and we were unable to recover it. 00:34:43.079 [2024-07-23 18:22:50.435618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.079 [2024-07-23 18:22:50.435644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 
00:34:43.080 [2024-07-23 18:22:50.435769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.435797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.435925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.435951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.436043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.436068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.436170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.436196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.436290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.436322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 
00:34:43.080 [2024-07-23 18:22:50.436471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.436497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.436620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.436646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.436764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.436790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.436936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.436961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.437109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.437139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 
00:34:43.080 [2024-07-23 18:22:50.437234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.437258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.437400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.437426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.437542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.437567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.437657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.437683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.437827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.437852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 
00:34:43.080 [2024-07-23 18:22:50.437945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.437971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.438072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.438097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.438216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.438242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.438371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.438399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.438542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.438568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 
00:34:43.080 [2024-07-23 18:22:50.438664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.438690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.438817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.438842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.438931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.438957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.439085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.439111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.439200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.439225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 
00:34:43.080 [2024-07-23 18:22:50.439350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.439377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.439476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.439502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.439628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.439653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.439774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.439800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 00:34:43.080 [2024-07-23 18:22:50.439898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.080 [2024-07-23 18:22:50.439925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.080 qpair failed and we were unable to recover it. 
00:34:43.080 [2024-07-23 18:22:50.440051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.080 [2024-07-23 18:22:50.440078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.080 qpair failed and we were unable to recover it.
00:34:43.080 [2024-07-23 18:22:50.440188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.080 [2024-07-23 18:22:50.440226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.080 qpair failed and we were unable to recover it.
00:34:43.080 [2024-07-23 18:22:50.440352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.080 [2024-07-23 18:22:50.440382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.080 qpair failed and we were unable to recover it.
00:34:43.080 [2024-07-23 18:22:50.440522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.080 [2024-07-23 18:22:50.440548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.080 qpair failed and we were unable to recover it.
00:34:43.080 [2024-07-23 18:22:50.440635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.440661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.440782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.440808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.440931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.440957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.441080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.441107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.441230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.441256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.441380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.441408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.441558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.441584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.441707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.441733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.441852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.441877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.441966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.441992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.442140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.442165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.442258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.442283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.442520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.442547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.442663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.442689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.442832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.442858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.442950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.442980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.443071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.443096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.443178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.443204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.443290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.443324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.443444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.443470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.443594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.443620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.443720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.443745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.443839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.443865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.443969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.443995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.444094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.444120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.444266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.444291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.444446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.444472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.444622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.444647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.444761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.444786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.444905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.444931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.445022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.445047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.445169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.445196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.445342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.445369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.445494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.445519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.445607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.445631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.445749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.445774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.445871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.445896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.081 qpair failed and we were unable to recover it.
00:34:43.081 [2024-07-23 18:22:50.446017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.081 [2024-07-23 18:22:50.446043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.446137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.446165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.446257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.446282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.446406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.446432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.446576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.446601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.446732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.446770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.446895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.446923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.447051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.447076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.447164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.447190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.447288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.447313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.447448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.447474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.447629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.447654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.447746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.447772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.447908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.447936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.448028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.448053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.448177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.448202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.448294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.448326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.448446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.448472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.448591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.448621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.448717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.448742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.448839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.448865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.448982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.449008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.449125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.449150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.449241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.449268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.449403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.449429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.449547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.449572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.449658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.449683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.449781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.449807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.449959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.449985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.450111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.450140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.450235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.450260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.450380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.450406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.450509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.450534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.450647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.450672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.450757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.450782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.450938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.450962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.451083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.082 [2024-07-23 18:22:50.451109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.082 qpair failed and we were unable to recover it.
00:34:43.082 [2024-07-23 18:22:50.451203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.451228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.451353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.451381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.451501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.451527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.451644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.451669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.451786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.451812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.451952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.451977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.452103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.452128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.452250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.452276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.452423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.452449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.452540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.452567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.452712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.452738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.452835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.452860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.452977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.453003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.453152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.453177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.453297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.453331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.453417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.453443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.453561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.453587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.453719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.453745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.453868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.453893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.454009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.454035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.454157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.454182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.454278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.454308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.454413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.454438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.454550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.454575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.454670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.454695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.454828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.454856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.455007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.455033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.455154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.455179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.455339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.455366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.455484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.455509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.455603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.455629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.455728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.455754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.455874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.455900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.456022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.456047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.456171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.456196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.456328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.456354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.456469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.083 [2024-07-23 18:22:50.456494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.083 qpair failed and we were unable to recover it.
00:34:43.083 [2024-07-23 18:22:50.456583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.083 [2024-07-23 18:22:50.456608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.456706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.456733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.456853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.456878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.457005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.457032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.457153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.457178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 
00:34:43.084 [2024-07-23 18:22:50.457297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.457327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.457427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.457453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.457552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.457578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.457667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.457693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.457842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.457868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 
00:34:43.084 [2024-07-23 18:22:50.457960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.457985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.458116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.458155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.458310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.458345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.458447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.458474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.458618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.458643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 
00:34:43.084 [2024-07-23 18:22:50.458738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.458763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.458854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.458879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.458998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.459024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.459142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.459168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.459262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.459288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 
00:34:43.084 [2024-07-23 18:22:50.459450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.459476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.459591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.459616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.459737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.459762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.459851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.459876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.459993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.460025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 
00:34:43.084 [2024-07-23 18:22:50.460148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.460174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.460270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.460295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.460395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.460421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.460517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.460542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 00:34:43.084 [2024-07-23 18:22:50.460632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.460657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.084 qpair failed and we were unable to recover it. 
00:34:43.084 [2024-07-23 18:22:50.460771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.084 [2024-07-23 18:22:50.460796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.460941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.460966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.461089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.461115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.461249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.461287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.461406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.461435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 
00:34:43.085 [2024-07-23 18:22:50.461582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.461609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.461703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.461729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.461848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.461880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.462002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.462029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.462140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.462165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 
00:34:43.085 [2024-07-23 18:22:50.462284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.462310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.462412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.462438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.462557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.462583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.462728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.462754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.462886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.462912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 
00:34:43.085 [2024-07-23 18:22:50.463046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.463074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.463224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.463250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.463370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.463397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.463490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.463515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.463602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.463628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 
00:34:43.085 [2024-07-23 18:22:50.463718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.463745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.463901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.463929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.464054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.464079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.464241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.464279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.464415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.464444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 
00:34:43.085 [2024-07-23 18:22:50.464569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.464595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.464714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.464740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.464879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.464905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.464999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.465024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.465173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.465198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 
00:34:43.085 [2024-07-23 18:22:50.465300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.465333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.465456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.465481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.465603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.465628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.465745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.465770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.465889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.465918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 
00:34:43.085 [2024-07-23 18:22:50.466008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.085 [2024-07-23 18:22:50.466034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.085 qpair failed and we were unable to recover it. 00:34:43.085 [2024-07-23 18:22:50.466124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.466152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 00:34:43.086 [2024-07-23 18:22:50.466279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.466304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 00:34:43.086 [2024-07-23 18:22:50.466391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.466417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 00:34:43.086 [2024-07-23 18:22:50.466510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.466537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 
00:34:43.086 [2024-07-23 18:22:50.466629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.466655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 00:34:43.086 [2024-07-23 18:22:50.466809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.466834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 00:34:43.086 [2024-07-23 18:22:50.466932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.466957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 00:34:43.086 [2024-07-23 18:22:50.467046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.467072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 00:34:43.086 [2024-07-23 18:22:50.467164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.086 [2024-07-23 18:22:50.467189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.086 qpair failed and we were unable to recover it. 
00:34:43.086 [2024-07-23 18:22:50.467310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:43.086 [2024-07-23 18:22:50.467356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 
00:34:43.086 qpair failed and we were unable to recover it. 
00:34:43.086 [2024-07-23 18:22:50.469485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:43.086 [2024-07-23 18:22:50.469513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 
00:34:43.086 qpair failed and we were unable to recover it. 
00:34:43.087 [2024-07-23 18:22:50.475308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:43.087 [2024-07-23 18:22:50.475358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 
00:34:43.087 qpair failed and we were unable to recover it. 
00:34:43.089 [2024-07-23 18:22:50.483568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.483593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.483691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.483718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.483866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.483892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.484013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.484039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.484139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.484164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 
00:34:43.089 [2024-07-23 18:22:50.484251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.484277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.484390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.484416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.484518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.484546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.484675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.484701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.484790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.484815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 
00:34:43.089 [2024-07-23 18:22:50.484901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.484925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.485045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.485070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.485192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.485217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.485339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.485367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.485461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.485487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 
00:34:43.089 [2024-07-23 18:22:50.485584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.485609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.485708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.089 [2024-07-23 18:22:50.485734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.089 qpair failed and we were unable to recover it. 00:34:43.089 [2024-07-23 18:22:50.485825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.485856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.485944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.485970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.486096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.486122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-23 18:22:50.486272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.486297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.486398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.486430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.486555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.486580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.486697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.486723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.486843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.486869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-23 18:22:50.486993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.487019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.487137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.487162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.487279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.487304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.487409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.487436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.487534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.487559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-23 18:22:50.487640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.487665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.487782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.487807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.487924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.487950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.488071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.488096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.488215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.488241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-23 18:22:50.488346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.488372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.488465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.488492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.488615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.488641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.488755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.488780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.488908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.488934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-23 18:22:50.489034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.489059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.489181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.489206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.489331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.489358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.489503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.489528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.489655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.489681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-23 18:22:50.489801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.489826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.489951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.489979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.490139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.490166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.490323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.490349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.490466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.490491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-23 18:22:50.490589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.490614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.490712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.490737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.490879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-23 18:22:50.490905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-23 18:22:50.490995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.491020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.491140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.491165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-23 18:22:50.491283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.491310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.491450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.491476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.491622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.491648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.491769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.491795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.491882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.491907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-23 18:22:50.492030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.492056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.492171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.492203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.492293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.492324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.492447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.492472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.492558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.492583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-23 18:22:50.492681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.492706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.492827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.492852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.492944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.492970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.493069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.493094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.493187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.493213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-23 18:22:50.493337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.493365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.493483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.493509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.493628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.493653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.493749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.493775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.493892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.493917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-23 18:22:50.494017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.494044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.494167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.494193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.494313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.494346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.494468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.494493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.494585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.494610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-23 18:22:50.494729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.494755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.494843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.494870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.495020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.495048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.495198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.495223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.495350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.495377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-23 18:22:50.495476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.495501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.495598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.495623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.495740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.495765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.495872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-23 18:22:50.495899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-23 18:22:50.496018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.496043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.092 [2024-07-23 18:22:50.496170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.496196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.496313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.496346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.496465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.496490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.496609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.496635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.496733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.496758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.092 [2024-07-23 18:22:50.496852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.496877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.496969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.496995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.497118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.497145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.497260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.497286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.497388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.497414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.092 [2024-07-23 18:22:50.497541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.497567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.497663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.497693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.497783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.497808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.497900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.497927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.498040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.498066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.092 [2024-07-23 18:22:50.498186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.498212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.498302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.498335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.498452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.498477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.498599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.498625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.498708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.498733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.092 [2024-07-23 18:22:50.498826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.498852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.498997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.499022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.499144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.499169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.499315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.499356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.499476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.499501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.092 [2024-07-23 18:22:50.499636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.499661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.499792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.499818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.499908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.499934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.500051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.500077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.500166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.500192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.092 [2024-07-23 18:22:50.500286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.500311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.500409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.500436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.500582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.500608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.500696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.500721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-23 18:22:50.500836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-23 18:22:50.500861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.093 [2024-07-23 18:22:50.500983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.501008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.501091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.501116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.501235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.501260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.501413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.501439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.501533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.501558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 
00:34:43.093 [2024-07-23 18:22:50.501716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.501741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.501861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.501886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.502010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.502035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.502120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.502146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.502232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.502258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 
00:34:43.093 [2024-07-23 18:22:50.502350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.502376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.502474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.502501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.502600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.502625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.502715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.502740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.502858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.502883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 
00:34:43.093 [2024-07-23 18:22:50.503007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.503033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.503128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.503158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.503309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.503347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.503446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.503472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.503594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.503620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 
00:34:43.093 [2024-07-23 18:22:50.503744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.503769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.503858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.503882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.503979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.504004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.504127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.504152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.504270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.504296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 
00:34:43.093 [2024-07-23 18:22:50.504444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.504486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.504602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.504628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.504726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.504751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.504887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.504912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.505037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.505062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 
00:34:43.093 [2024-07-23 18:22:50.505190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.505215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.505311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.505347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.505464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.505490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.505605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.505630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.505717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.505742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 
00:34:43.093 [2024-07-23 18:22:50.505838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.093 [2024-07-23 18:22:50.505864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.093 qpair failed and we were unable to recover it. 00:34:43.093 [2024-07-23 18:22:50.505958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.505985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.506107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.506133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.506242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.506268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.506386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.506413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 
00:34:43.094 [2024-07-23 18:22:50.506533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.506559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.506646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.506672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.506759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.506785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.506917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.506945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.507078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.507103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 
00:34:43.094 [2024-07-23 18:22:50.507220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.507259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.507399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.507428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.507553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.507579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.507696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.507722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.507824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.507850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 
00:34:43.094 [2024-07-23 18:22:50.507947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.507973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.508069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.508096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.508213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.508240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.508387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.508418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.508505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.508531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 
00:34:43.094 [2024-07-23 18:22:50.508624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.508649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.508741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.508772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.508874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.508913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.509018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.509045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.509167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.509192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 
00:34:43.094 [2024-07-23 18:22:50.509338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.509364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.509486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.509513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.509634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.509659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.509754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.509779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.509897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.509922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 
00:34:43.094 [2024-07-23 18:22:50.510044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.510069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.094 qpair failed and we were unable to recover it. 00:34:43.094 [2024-07-23 18:22:50.510196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.094 [2024-07-23 18:22:50.510224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.510371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.510398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.510495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.510522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.510637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.510662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 
00:34:43.095 [2024-07-23 18:22:50.510764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.510790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.510875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.510901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.511023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.511052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.511174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.511205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.511329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.511356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 
00:34:43.095 [2024-07-23 18:22:50.511454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.511480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.511568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.511593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.511715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.511740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.511860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.511886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.512035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.512061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 
00:34:43.095 [2024-07-23 18:22:50.512148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.512176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.512284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.512311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.512440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.512466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.512562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.512589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.512710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.512736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 
00:34:43.095 [2024-07-23 18:22:50.512890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.512915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.513050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.513075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.513218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.513246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.513363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.513402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.513530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.513557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 
00:34:43.095 [2024-07-23 18:22:50.513676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.513702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.513798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.513823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.513923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.513949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.514039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.514065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.514159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.514185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 
00:34:43.095 [2024-07-23 18:22:50.514269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.514294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.514433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.514465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.514612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.514637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.514728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.514753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.514897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.514922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 
00:34:43.095 [2024-07-23 18:22:50.515068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.515093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.515211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.515237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.515373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.515399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.095 qpair failed and we were unable to recover it. 00:34:43.095 [2024-07-23 18:22:50.515514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.095 [2024-07-23 18:22:50.515540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.515656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.515681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 
00:34:43.096 [2024-07-23 18:22:50.515775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.515802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.515949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.515975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.516098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.516123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.516206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.516231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.516333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.516373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 
00:34:43.096 [2024-07-23 18:22:50.516512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.516540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.516660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.516688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.516793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.516820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.516976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.517002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.517097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.517123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 
00:34:43.096 [2024-07-23 18:22:50.517239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.517266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.517367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.517393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.517543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.517569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.517680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.517705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.517822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.517848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 
00:34:43.096 [2024-07-23 18:22:50.517993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.518019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.518138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.518163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.518249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.518274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.518372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.518400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.518522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.518548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 
00:34:43.096 [2024-07-23 18:22:50.518693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.518718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.518862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.518887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.519011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.519036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.519158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.519184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.519278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.519306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 
00:34:43.096 [2024-07-23 18:22:50.519432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.519463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.519591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.519618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.519711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.519737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.519829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.519856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.520010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.520036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 
00:34:43.096 [2024-07-23 18:22:50.520170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.520208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.520338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.520371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.520507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.520533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.520636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.520661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.096 [2024-07-23 18:22:50.520808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.520833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 
00:34:43.096 [2024-07-23 18:22:50.520954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.096 [2024-07-23 18:22:50.520979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.096 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.521072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.521098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.521190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.521215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.521310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.521344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.521466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.521492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 
00:34:43.097 [2024-07-23 18:22:50.521613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.521639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.521733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.521759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.521907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.521933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.522031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.522061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.522208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.522234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 
00:34:43.097 [2024-07-23 18:22:50.522392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.522419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.522531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.522556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.522700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.522725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.522843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.522869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.522954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.522979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 
00:34:43.097 [2024-07-23 18:22:50.523093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.523118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.523239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.523264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.523395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.523423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.523546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.523572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.523666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.523691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 
00:34:43.097 [2024-07-23 18:22:50.523792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.523819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.523936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.523962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.524082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.524107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.524203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.524230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.524326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.524353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 
00:34:43.097 [2024-07-23 18:22:50.524498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.524523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.524642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.524668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.524765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.524792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.524890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.524915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.525017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.525044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 
00:34:43.097 [2024-07-23 18:22:50.525163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.525188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.525276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.525301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.525402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.525428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.525551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.525577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.525701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.525727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 
00:34:43.097 [2024-07-23 18:22:50.525817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.525842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-23 18:22:50.525961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-23 18:22:50.525990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.526080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.526105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.526229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.526255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.526342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.526368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-23 18:22:50.526466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.526492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.526594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.526620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.526713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.526739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.526862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.526887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.526980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.527009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-23 18:22:50.527133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.527159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.527279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.527304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.527439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.527465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.527611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.527637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.527748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.527774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-23 18:22:50.527866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.527893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.528012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.528038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.528158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.528184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.528306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.528347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.528442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.528468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-23 18:22:50.528586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.528611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.528763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.528788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.528919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.528944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.529044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.529069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.529211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.529237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-23 18:22:50.529386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.529412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.529579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.529604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.529694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.529720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.529841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.529867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.529963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.529989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-23 18:22:50.530105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-23 18:22:50.530131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-23 18:22:50.530250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.530276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.530385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.530411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.530499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.530525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.530639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.530664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-23 18:22:50.530747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.530773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.530862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.530887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.531008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.531033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.531154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.531179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.531301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.531334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-23 18:22:50.531467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.531493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.531586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.531616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.531735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.531761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.531850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.531875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.531993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.532019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-23 18:22:50.532140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.532165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.532256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.532282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.532434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.532460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.532575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.532600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.532725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.532751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-23 18:22:50.532837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.532864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.532982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.533008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.533131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.533156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.533248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.533273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.533376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.533404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-23 18:22:50.533510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.533536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.533660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.533687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.533775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.533801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.533921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.533946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-23 18:22:50.534092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-23 18:22:50.534117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-23 18:22:50.534204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.099 [2024-07-23 18:22:50.534229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.099 qpair failed and we were unable to recover it.
00:34:43.099 [2024-07-23 18:22:50.534344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.099 [2024-07-23 18:22:50.534371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.099 qpair failed and we were unable to recover it.
00:34:43.099 [2024-07-23 18:22:50.534516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.099 [2024-07-23 18:22:50.534541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.099 qpair failed and we were unable to recover it.
00:34:43.099 [2024-07-23 18:22:50.534656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.099 [2024-07-23 18:22:50.534681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.099 qpair failed and we were unable to recover it.
00:34:43.099 [2024-07-23 18:22:50.534771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.099 [2024-07-23 18:22:50.534798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.099 qpair failed and we were unable to recover it.
00:34:43.099 [2024-07-23 18:22:50.534913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.099 [2024-07-23 18:22:50.534938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.099 qpair failed and we were unable to recover it.
00:34:43.099 [2024-07-23 18:22:50.535056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.099 [2024-07-23 18:22:50.535081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.099 qpair failed and we were unable to recover it.
00:34:43.099 [2024-07-23 18:22:50.535202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.099 [2024-07-23 18:22:50.535228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.099 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.535363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.535403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.535503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.535531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.535621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.535647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.535766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.535792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.535939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.535964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.536067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.536094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.536219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.536246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.536372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.536400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.536545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.536570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.536667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.536693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.537621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.537654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.537810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.537837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.537983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.538009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.538112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.538142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.538262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.538289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.538394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.538420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.538544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.538569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.538681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.538707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.538857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.538882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.538978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.539004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.539094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.539120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.539286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.539331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.539437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.539465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.539589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.539615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.539711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.539739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.539839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.539866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.539967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.539993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.540088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.540114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.540261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.540286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.540403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.540429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.540551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.540576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.540675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.540700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.540840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.540865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.540983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.541008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-23 18:22:50.541129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-23 18:22:50.541154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.541276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.541301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.541472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.541498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.541623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.541648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.541766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.541790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.541889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.541916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.542054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.542092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.542196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.542223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.542312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.542350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.542463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.542489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.542590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.542615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.542705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.542730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.542883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.542909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.543046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.543071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.543217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.543256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.543388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.543416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.543508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.543534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.543633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.543666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.543789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.543815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.543940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.543969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.544086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.544114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.544233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.544258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.544385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.544412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.544500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.544525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.544612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.544637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.544754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.544779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.544860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.544886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.545006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.545031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.545154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.545179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.545304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.545345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.545461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.545487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.545585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.545610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.545699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.545724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.545844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.545870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.545993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.546018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.546109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.546134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.546253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.546278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-23 18:22:50.546370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-23 18:22:50.546396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.546517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.546543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.546666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.546691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.546785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.546812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.546934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.546961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.547073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.547099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.547195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.547221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.547337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.547364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.547485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.547510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.547638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.547663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.547800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.547826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.547926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.547950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.548055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.548080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.548205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.548233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.548341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.548368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.548478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.548504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.548629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.548655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.548742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.548768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.548918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.548944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.549061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.549087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.549213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.549239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.549337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.549364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.549458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.549485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.549586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.549611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.549705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.549730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.549832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.549859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.550012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.550038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.550123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.550148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.550277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.550302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.550418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.550445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.550533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.550558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.550703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.550728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.550832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.550857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.102 qpair failed and we were unable to recover it.
00:34:43.102 [2024-07-23 18:22:50.550971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.102 [2024-07-23 18:22:50.550996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.103 qpair failed and we were unable to recover it.
00:34:43.103 [2024-07-23 18:22:50.551108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.103 [2024-07-23 18:22:50.551146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.103 qpair failed and we were unable to recover it.
00:34:43.103 [2024-07-23 18:22:50.551273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.103 [2024-07-23 18:22:50.551301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.103 qpair failed and we were unable to recover it.
00:34:43.103 [2024-07-23 18:22:50.551411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.551438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.551557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.551583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.551699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.551725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.551824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.551852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.551948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.551975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 
00:34:43.103 [2024-07-23 18:22:50.552092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.552118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.552210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.552236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.552355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.552381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.552484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.552512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.552606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.552631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 
00:34:43.103 [2024-07-23 18:22:50.552726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.552753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.552848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.552874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.553022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.553048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.553138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.553169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.553259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.553285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 
00:34:43.103 [2024-07-23 18:22:50.553390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.553416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.553530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.553555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.553662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.553688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.553845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.553870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.553971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.553998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 
00:34:43.103 [2024-07-23 18:22:50.554120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.554145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.554264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.554289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.554427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.554455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.554548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.554574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 00:34:43.103 [2024-07-23 18:22:50.554667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.103 [2024-07-23 18:22:50.554703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.103 qpair failed and we were unable to recover it. 
00:34:43.103 [2024-07-23 18:22:50.554821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.554847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.554966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.554992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.555122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.555149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.555298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.555329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.555486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.555512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 
00:34:43.104 [2024-07-23 18:22:50.555648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.555674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.555797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.555823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.555949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.555974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.556120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.556145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.556263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.556289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 
00:34:43.104 [2024-07-23 18:22:50.556406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.556431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.556557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.556582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.556724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.556749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.556891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.556916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.557008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.557033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 
00:34:43.104 [2024-07-23 18:22:50.557157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.557184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.557281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.557307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.557402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.557428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.557518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.557544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.557641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.557666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 
00:34:43.104 [2024-07-23 18:22:50.557811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.557837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.557930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.557955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.558075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.558102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.558227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.558253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.558351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.558377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 
00:34:43.104 [2024-07-23 18:22:50.558472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.558498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.558620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.558645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.558741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.558767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.558859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.558889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.559008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.559033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 
00:34:43.104 [2024-07-23 18:22:50.559133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.559158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.559255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.559280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.559409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.559437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.559532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.559559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 00:34:43.104 [2024-07-23 18:22:50.559709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.104 [2024-07-23 18:22:50.559735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.104 qpair failed and we were unable to recover it. 
00:34:43.105 [2024-07-23 18:22:50.559855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.559880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.560001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.560026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.560126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.560151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.560274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.560299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.560397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.560423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 
00:34:43.105 [2024-07-23 18:22:50.560539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.560564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.560690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.560715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.560848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.560874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.560969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.560994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.561092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.561119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 
00:34:43.105 [2024-07-23 18:22:50.561239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.561264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.561381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.561407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.561501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.561527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.561620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.561646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 00:34:43.105 [2024-07-23 18:22:50.561765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.105 [2024-07-23 18:22:50.561790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.105 qpair failed and we were unable to recover it. 
00:34:43.105 [2024-07-23 18:22:50.561883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.561909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.562006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.562032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.562177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.562202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.562321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.562347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.562441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.562467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.562571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.562597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.562741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.562771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.562911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.562937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.563086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.563111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.563203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.563229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.563341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.563380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.563484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.563512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.563635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.563661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.563790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.563816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.563914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.563939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.564058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.564083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.564209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.564236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.564360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.564386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.564482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.564514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.564604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.564629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.564755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.564780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.564929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.564954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.105 qpair failed and we were unable to recover it.
00:34:43.105 [2024-07-23 18:22:50.565049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.105 [2024-07-23 18:22:50.565075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.565191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.565217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.565303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.565342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.565464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.565490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.565588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.565615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.565745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.565771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.565884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.565910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.566029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.566055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.566166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.566200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.566300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.566336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.566487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.566513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.566604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.566629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.566756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.566781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.566873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.566900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.567004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.567030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.567190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.567216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.567332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.567358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.567451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.567477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.567601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.567626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.567740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.567765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.567888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.567914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.568039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.568067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.568188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.568214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.568321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.568348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.568477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.568503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.568595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.568622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.568721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.568747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.568833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.568858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.568949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.568975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.569073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.569098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.569189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.569215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.569313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.569343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.569435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.569461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.569556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.569582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.569672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.569698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.569792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.569817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.569912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.569942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.570059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.570084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.570232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.570257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.570342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.570368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.106 [2024-07-23 18:22:50.570451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.106 [2024-07-23 18:22:50.570476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.106 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.570603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.570628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.570748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.570773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.570872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.570899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.570993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.571018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.571108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.571133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.571224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.571249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.571371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.571397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.571527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.571555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.571676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.571702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.571802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.571828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.571945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.571971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.572119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.572145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.572264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.572290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.572382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.572409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.572509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.572534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.572678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.572703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.572823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.572849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.572945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.572972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.573090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.573116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.573267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.573302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.573457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.573483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.573602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.573627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.573713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.573738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.573863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.573889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.573985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.574010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.574130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.574155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.574280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.574306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.574414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.574439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.574563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.574588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.574687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.574713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.574857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.574882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.575027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.575052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.575151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.575176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.575325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.575352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.575450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.575476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.575563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.575592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.575708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.575733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.575840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.575865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.576013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.576038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.576181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.107 [2024-07-23 18:22:50.576206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.107 qpair failed and we were unable to recover it.
00:34:43.107 [2024-07-23 18:22:50.576303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.576333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.576491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.576516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.576637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.576662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.576759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.576784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.576902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.576927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.577053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.577078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.577183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.577221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.577379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.577408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.577533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.577559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.577682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.577708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.577798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.577824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.577926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.108 [2024-07-23 18:22:50.577952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.108 qpair failed and we were unable to recover it.
00:34:43.108 [2024-07-23 18:22:50.578036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.578061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.578156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.578183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.578278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.578305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.578409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.578435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.578555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.578581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 
00:34:43.108 [2024-07-23 18:22:50.578677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.578702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.578793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.578821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.578939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.578966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.579055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.579081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.579175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.579201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 
00:34:43.108 [2024-07-23 18:22:50.579334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.579361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.579460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.579486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.579574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.579600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.579727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.579752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.579846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.579872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 
00:34:43.108 [2024-07-23 18:22:50.579959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.579984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.580090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.580115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.580211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.580236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.580386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.580412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.580536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.580561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 
00:34:43.108 [2024-07-23 18:22:50.580648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.580673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.580822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.580847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-23 18:22:50.580980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-23 18:22:50.581006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.581103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.581135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.581232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.581258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.581355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.581381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.581474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.581500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.581645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.581673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.581765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.581791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.581910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.581935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.582033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.582062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.582185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.582211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.582357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.582383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.582483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.582508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.582600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.582625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.582720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.582745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.582871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.582896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.583018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.583043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.583164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.583189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.583309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.583341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.583462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.583488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.583582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.583607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.583732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.583758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.583917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.583943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.584087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.584112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.584232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.584257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.584359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.584386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.584484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.584510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.584607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.584633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.584749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.584775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.584874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.584901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.584999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.585024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.585144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.585171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.585294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.585324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.585421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.585447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.585541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.585567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.585684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.585710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.585796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.585821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.585911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.585938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.586064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.586088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.586204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.586230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.586358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.586384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.586505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.586530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.586630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.586660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.586787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.586812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 [2024-07-23 18:22:50.586934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-23 18:22:50.586960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-23 18:22:50.587081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.587107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.587219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.587259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.587402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.587431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.587548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.587574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.587676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.587702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.587820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.587847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.587942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.587968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.588121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.588149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.588276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.588303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.588412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.588450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.588556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.588583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.588689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.588715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.588848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.588873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.588992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.589019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.589114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.589140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.589260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.589285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.589388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.589415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.589538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.589564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.589711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.589736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.589827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.589852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.589968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.589993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.590081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.590106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.590219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.590245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.590364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.590390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.590487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.590513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.590656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.590681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.590802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.590827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.590947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.590972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.591089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.591114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.591228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.591253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.591378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.591404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.591499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.591525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.591648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.591673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.591792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.591818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.591947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.591973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.592091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.592116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.592262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.592287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.592406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.592436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.592558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.592584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.592692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.592717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.592812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.592838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.592935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.592961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.593061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.593086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 00:34:43.110 [2024-07-23 18:22:50.593204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.110 [2024-07-23 18:22:50.593229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.110 qpair failed and we were unable to recover it. 
00:34:43.110 [2024-07-23 18:22:50.593345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.593371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.593465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.593490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.593572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.593597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.593715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.593740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.593861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.593887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 
00:34:43.111 [2024-07-23 18:22:50.593975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.594001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.594119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.594144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.594266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.594292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.594410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.594436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.594534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.594560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 
00:34:43.111 [2024-07-23 18:22:50.594658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.594684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.594805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.594830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.594952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.594978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.595070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.595095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.595231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.595270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 
00:34:43.111 [2024-07-23 18:22:50.595405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.595433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.595562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.595589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.595711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.595737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.595829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.595855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.595971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.595997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 
00:34:43.111 [2024-07-23 18:22:50.596122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.596148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.596295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.596325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.596447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.596473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.596586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.596611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.596703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.596728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 
00:34:43.111 [2024-07-23 18:22:50.596812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.596837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.596985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.597012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.597133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.597158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.597281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.597308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.597437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.597462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 
00:34:43.111 [2024-07-23 18:22:50.597558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.597584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.597730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.597756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.597878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.597903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.597991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.598021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.598122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.598148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 
00:34:43.111 [2024-07-23 18:22:50.598266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.598292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.598419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.598445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.598572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.598597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.598718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.598744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.111 [2024-07-23 18:22:50.598886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.598912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 
00:34:43.111 [2024-07-23 18:22:50.599000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.111 [2024-07-23 18:22:50.599025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.111 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.599136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.599162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.599259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.599287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.599376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.599402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.599523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.599548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 
00:34:43.112 [2024-07-23 18:22:50.599639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.599665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.599760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.599787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.599926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.599951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.600098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.600124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.600244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.600270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 
00:34:43.112 [2024-07-23 18:22:50.600396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.600423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.600550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.600575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.600689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.600714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.600839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.600865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.600997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.601022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 
00:34:43.112 [2024-07-23 18:22:50.601144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.601170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.601260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.601285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.601413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.601439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.601558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.601585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.601712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.601737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 
00:34:43.112 [2024-07-23 18:22:50.601837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.601863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.602013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.602038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.602126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.602151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.602275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.602302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.602461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.602487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 
00:34:43.112 [2024-07-23 18:22:50.602584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.602609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.602731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.602757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.602876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.602902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.603048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.603074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-23 18:22:50.603160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.112 [2024-07-23 18:22:50.603185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.112 qpair failed and we were unable to recover it. 
00:34:43.112 [2024-07-23 18:22:50.603305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.112 [2024-07-23 18:22:50.603340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.112 qpair failed and we were unable to recover it.
00:34:43.112 [2024-07-23 18:22:50.603469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.112 [2024-07-23 18:22:50.603495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.112 qpair failed and we were unable to recover it.
00:34:43.112 [2024-07-23 18:22:50.603640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.112 [2024-07-23 18:22:50.603666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.112 qpair failed and we were unable to recover it.
00:34:43.112 [2024-07-23 18:22:50.603788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.112 [2024-07-23 18:22:50.603818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.112 qpair failed and we were unable to recover it.
00:34:43.112 [2024-07-23 18:22:50.603969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.112 [2024-07-23 18:22:50.603994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.112 qpair failed and we were unable to recover it.
00:34:43.112 [2024-07-23 18:22:50.604120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.112 [2024-07-23 18:22:50.604146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.112 qpair failed and we were unable to recover it.
00:34:43.112 [2024-07-23 18:22:50.604295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.112 [2024-07-23 18:22:50.604329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.112 qpair failed and we were unable to recover it.
00:34:43.112 [2024-07-23 18:22:50.604472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.604498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.604612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.604638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.604801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.604827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.604944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.604970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.605088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.605113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.605239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.605264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.605390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.605417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.605512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.605538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.605631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.605656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.605813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.605839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.605924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.605949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.606068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.606094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.606220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.606248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.606348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.606375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.606488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.606513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.606631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.606656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.606753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.606779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.606904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.606929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.607067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.607094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.607186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.607212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.607346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.607372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.607498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.607523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.607653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.607679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.607772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.113 [2024-07-23 18:22:50.607798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.113 qpair failed and we were unable to recover it.
00:34:43.113 [2024-07-23 18:22:50.607896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.607921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.608034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.608059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.608149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.608176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.608272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.608297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.608440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.608467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.608564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.608588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.608697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.608722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.608811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.608837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.608936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.608962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.609051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.609077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.609169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.609195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.609346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.609372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.609465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.609494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.609625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.609663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.609765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.609790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.609892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.609917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.610008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.610034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.610128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.610153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.610307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.610342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.610446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.610472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.610567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.610594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.610689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.610715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.610819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.610844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.610960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.610994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.611095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.611120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.611203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.611228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.611346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.611372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.611456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.611481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.611576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.611601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.611716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.611741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.611865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.611892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.611980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.612007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.612125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.114 [2024-07-23 18:22:50.612152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.114 qpair failed and we were unable to recover it.
00:34:43.114 [2024-07-23 18:22:50.612275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.612301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.612395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.612421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.612541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.612567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.612689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.612714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.612836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.612861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.612978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.613004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.613101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.613129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.613247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.613273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.613384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.613411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.613559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.613589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.613739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.613764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.613902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.613928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.614050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.614076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.614195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.614220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.614337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.614363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.614485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.614510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.614599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.614624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.614708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.614733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.614820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.614845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.614933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.614963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.615085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.615112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.615219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.615258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.615372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.615401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.615497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.615524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.615622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.615648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.615763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.615789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.615896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.615923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.616047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.616072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.616191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.616216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.616336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.616363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.616463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.115 [2024-07-23 18:22:50.616488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.115 qpair failed and we were unable to recover it.
00:34:43.115 [2024-07-23 18:22:50.616584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.115 [2024-07-23 18:22:50.616610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.115 qpair failed and we were unable to recover it. 00:34:43.115 [2024-07-23 18:22:50.616703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.115 [2024-07-23 18:22:50.616729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.115 qpair failed and we were unable to recover it. 00:34:43.115 [2024-07-23 18:22:50.616856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.115 [2024-07-23 18:22:50.616884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.617008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.617033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.617127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.617152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 
00:34:43.116 [2024-07-23 18:22:50.617246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.617272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.617366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.617392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.617486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.617511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.617607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.617632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.617754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.617779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 
00:34:43.116 [2024-07-23 18:22:50.617901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.617926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.618058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.618083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.618170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.618195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.618325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.618352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.618460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.618485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 
00:34:43.116 [2024-07-23 18:22:50.618580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.618607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.618741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.618767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.618888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.618913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.619034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.619060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.619204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.619230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 
00:34:43.116 [2024-07-23 18:22:50.619346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.619385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.619477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.619504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.619608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.619635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.619726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.619753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.619884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.619909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 
00:34:43.116 [2024-07-23 18:22:50.620027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.620052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.620153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.620181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.620297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.620328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.620423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.620449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.620574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.620599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 
00:34:43.116 [2024-07-23 18:22:50.620756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.620782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.620867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.620893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.620983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.621008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.621099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.621124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 00:34:43.116 [2024-07-23 18:22:50.621242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.116 [2024-07-23 18:22:50.621269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.116 qpair failed and we were unable to recover it. 
00:34:43.116 [2024-07-23 18:22:50.621363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.621390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.621513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.621539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.621651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.621684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.621782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.621807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.621933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.621958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 
00:34:43.117 [2024-07-23 18:22:50.622059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.622084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.622206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.622231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.622383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.622409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.622529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.622556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.622701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.622726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 
00:34:43.117 [2024-07-23 18:22:50.622844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.622869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.622988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.623013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.623120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.623145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.623260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.623285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.623391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.623417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 
00:34:43.117 [2024-07-23 18:22:50.623517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.623542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.623662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.623695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.623812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.623839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.623989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.624015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.624134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.624160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 
00:34:43.117 [2024-07-23 18:22:50.624287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.624321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.624425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.624451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.624551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.624577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.624698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.624732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.624835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.624861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 
00:34:43.117 [2024-07-23 18:22:50.624986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.625012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.625126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.625152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.625251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.625276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.625408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.625434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.117 qpair failed and we were unable to recover it. 00:34:43.117 [2024-07-23 18:22:50.625524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.117 [2024-07-23 18:22:50.625550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 
00:34:43.118 [2024-07-23 18:22:50.625664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.625689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.625793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.625819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.625975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.626001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.626140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.626179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.626284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.626312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 
00:34:43.118 [2024-07-23 18:22:50.626447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.626473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.626585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.626610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.626732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.626757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.626855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.626880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.626969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.626995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 
00:34:43.118 [2024-07-23 18:22:50.627096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.627122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.627211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.627238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.627366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.627392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.627485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.627512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.627676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.627701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 
00:34:43.118 [2024-07-23 18:22:50.627825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.627850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.627974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.628000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.628156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.628182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.628284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.628310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 00:34:43.118 [2024-07-23 18:22:50.628450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.118 [2024-07-23 18:22:50.628476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.118 qpair failed and we were unable to recover it. 
00:34:43.118 [2024-07-23 18:22:50.628563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.118 [2024-07-23 18:22:50.628588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.118 qpair failed and we were unable to recover it.
00:34:43.118 [last message repeated for tqpair=0x7f6328000b90 through 18:22:50.633648]
00:34:43.119 [2024-07-23 18:22:50.633800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.119 [2024-07-23 18:22:50.633842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.119 qpair failed and we were unable to recover it.
00:34:43.120 [last message repeated for tqpair=0x7b3f40 through 18:22:50.634499]
00:34:43.120 [2024-07-23 18:22:50.634610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.120 [2024-07-23 18:22:50.634638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.120 qpair failed and we were unable to recover it.
00:34:43.122 [same connect() failure repeated through 18:22:50.645184, alternating between tqpair=0x7f6328000b90 and tqpair=0x7f6330000b90; every qpair failed and we were unable to recover it]
00:34:43.122 [2024-07-23 18:22:50.645270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.645296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.645416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.645444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.645587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.645626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.645752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.645780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.645903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.645943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 
00:34:43.122 [2024-07-23 18:22:50.646076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.646104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.646226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.646252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.646386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.646415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.646512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.646538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.646635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.646662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 
00:34:43.122 [2024-07-23 18:22:50.646764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.646791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.646896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.646922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.647045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.647071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.647163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.647189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.647310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.647343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 
00:34:43.122 [2024-07-23 18:22:50.647435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.647466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.647565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.647608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.647711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.647737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.647882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.647909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.648009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.648035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 
00:34:43.122 [2024-07-23 18:22:50.648156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.648183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.648278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.648311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.648423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.648452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.648546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.648572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.122 [2024-07-23 18:22:50.648665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.648692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 
00:34:43.122 [2024-07-23 18:22:50.648839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.122 [2024-07-23 18:22:50.648864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.122 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.648964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.648990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.649089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.649115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.649201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.649228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.649339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.649366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 
00:34:43.123 [2024-07-23 18:22:50.649489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.649514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.649628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.649653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.649778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.649804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.649946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.649972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.650064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.650090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 
00:34:43.123 [2024-07-23 18:22:50.650185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.123 [2024-07-23 18:22:50.650211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.123 qpair failed and we were unable to recover it. 00:34:43.123 [2024-07-23 18:22:50.650333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.650359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.650451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.650477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.650568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.650601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.650690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.650716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 
00:34:43.124 [2024-07-23 18:22:50.650868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.650894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.650991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.651016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.651160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.651202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.651322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.651361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.651470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.651497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 
00:34:43.124 [2024-07-23 18:22:50.651605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.651633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.651727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.651753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.651883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.651910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.652030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.652056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.652176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.652204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 
00:34:43.124 [2024-07-23 18:22:50.652301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.652333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.652447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.652473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.652567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.652593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.652723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.652750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.652871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.652897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 
00:34:43.124 [2024-07-23 18:22:50.653057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.653088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.653195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.653233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.653346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.653387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.653507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.653533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.653657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.653682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 
00:34:43.124 [2024-07-23 18:22:50.653802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.653828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.653951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.653977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.654071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.654097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.654250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.654277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.654411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.654438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 
00:34:43.124 [2024-07-23 18:22:50.654536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.654562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.654665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.654691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.654802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.654828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.654943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.654969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.655121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.655147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 
00:34:43.124 [2024-07-23 18:22:50.655247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.655281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.655426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.124 [2024-07-23 18:22:50.655455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.124 qpair failed and we were unable to recover it. 00:34:43.124 [2024-07-23 18:22:50.655548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.125 [2024-07-23 18:22:50.655575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.125 qpair failed and we were unable to recover it. 00:34:43.125 [2024-07-23 18:22:50.655701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.125 [2024-07-23 18:22:50.655727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.125 qpair failed and we were unable to recover it. 00:34:43.125 [2024-07-23 18:22:50.655850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.125 [2024-07-23 18:22:50.655876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.125 qpair failed and we were unable to recover it. 
00:34:43.125 [2024-07-23 18:22:50.656023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.125 [2024-07-23 18:22:50.656048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.125 qpair failed and we were unable to recover it.
00:34:43.125 [2024-07-23 18:22:50.656139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.125 [2024-07-23 18:22:50.656166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.125 qpair failed and we were unable to recover it.
[... the same three-record failure (posix.c:1023 connect() errno = 111, i.e. ECONNREFUSED; nvme_tcp.c:2383 sock connection error; "qpair failed and we were unable to recover it.") repeats 113 more times between 18:22:50.656298 and 18:22:50.671948, alternating between tqpair=0x7f6330000b90 and tqpair=0x7f6328000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:34:43.128 [2024-07-23 18:22:50.672073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.672099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.672196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.672222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.672343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.672370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.672470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.672495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.672585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.672612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 
00:34:43.128 [2024-07-23 18:22:50.672761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.672787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.672933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.672959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.673108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.673134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.673279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.673305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.673430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.673456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 
00:34:43.128 [2024-07-23 18:22:50.673577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.673603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.673726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.673753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.673871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.673901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.673998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.674025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.674143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.674169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 
00:34:43.128 [2024-07-23 18:22:50.674259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.674285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.674388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.674415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.674530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.674556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.674677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.674703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.674820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.674846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 
00:34:43.128 [2024-07-23 18:22:50.674968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.674994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.675083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.675110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.675229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.675255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.675351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.675378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.128 [2024-07-23 18:22:50.675480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.675506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 
00:34:43.128 [2024-07-23 18:22:50.675630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.128 [2024-07-23 18:22:50.675655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.128 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.675762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.675788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.675909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.675934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.676067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.676096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.676221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.676248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 
00:34:43.129 [2024-07-23 18:22:50.676343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.676370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.676489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.676515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.676605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.676631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.676776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.676801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.676924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.676950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 
00:34:43.129 [2024-07-23 18:22:50.677067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.677094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.677188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.677213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.677322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.677350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.677447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.677473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.677570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.677597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 
00:34:43.129 [2024-07-23 18:22:50.677682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.677708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.677855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.677881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.677963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.677989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.678077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.678103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.678219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.678244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 
00:34:43.129 [2024-07-23 18:22:50.678339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.678366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.678469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.678495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.678615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.678641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.678785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.678811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.678941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.678969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 
00:34:43.129 [2024-07-23 18:22:50.679095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.679121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.679245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.679270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.679367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.679397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.679516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.679543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.679637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.679664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 
00:34:43.129 [2024-07-23 18:22:50.679756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.679782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.679901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.679927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.680048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.680074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.680170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.680197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.680312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.680347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 
00:34:43.129 [2024-07-23 18:22:50.680495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.680520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.680662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.680688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.680790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.680817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.129 qpair failed and we were unable to recover it. 00:34:43.129 [2024-07-23 18:22:50.680931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.129 [2024-07-23 18:22:50.680957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.681055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.681082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 
00:34:43.130 [2024-07-23 18:22:50.681198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.681223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.681343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.681369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.681487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.681513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.681631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.681656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.681775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.681801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 
00:34:43.130 [2024-07-23 18:22:50.681951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.681979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.682098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.682124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.682239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.682264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.682359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.682385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.682477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.682502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 
00:34:43.130 [2024-07-23 18:22:50.682595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.682621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.682719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.682745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.682868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.682894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.682986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.683012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 00:34:43.130 [2024-07-23 18:22:50.683110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.130 [2024-07-23 18:22:50.683136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.130 qpair failed and we were unable to recover it. 
00:34:43.133 [2024-07-23 18:22:50.699221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.699246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.699366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.699392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.699487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.699514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.699617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.699642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.699762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.699787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 
00:34:43.133 [2024-07-23 18:22:50.699887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.699914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.700046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.700072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.700201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.700228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.700366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.700403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.700518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.700544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 
00:34:43.133 [2024-07-23 18:22:50.700646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.700672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.700777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.700803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.700931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.700958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.701085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.701111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.701234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.701260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 
00:34:43.133 [2024-07-23 18:22:50.701380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.701405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.701527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.701559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.701701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.133 [2024-07-23 18:22:50.701728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.133 qpair failed and we were unable to recover it. 00:34:43.133 [2024-07-23 18:22:50.701825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.701851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.701972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.701997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 
00:34:43.134 [2024-07-23 18:22:50.702093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.702121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.702211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.702237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.702336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.702363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.702521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.702547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.702689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.702715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 
00:34:43.134 [2024-07-23 18:22:50.702861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.702886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.703009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.703036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.703130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.703157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.703251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.134 [2024-07-23 18:22:50.703278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.134 qpair failed and we were unable to recover it. 00:34:43.134 [2024-07-23 18:22:50.703421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.703448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 
00:34:43.412 [2024-07-23 18:22:50.703603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.703629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.703729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.703756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.703857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.703884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.704016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.704042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.704139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.704166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 
00:34:43.412 [2024-07-23 18:22:50.704293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.704327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.704429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.704456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.704539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.704564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.704667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.704694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.704788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.704813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 
00:34:43.412 [2024-07-23 18:22:50.704904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.704930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.705023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.705049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.705142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.705169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.705267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.705292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.705437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.705489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 
00:34:43.412 [2024-07-23 18:22:50.705642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.705681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.705808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.705845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.705945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.705971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.706121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.706146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.706239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.706269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 
00:34:43.412 [2024-07-23 18:22:50.706375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.706401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.706485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.706511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.706601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.706627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.706752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.706778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.706902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.706927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 
00:34:43.412 [2024-07-23 18:22:50.707086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.707112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.707232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.707258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.707410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.707436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.707540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.707565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 00:34:43.412 [2024-07-23 18:22:50.707692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.412 [2024-07-23 18:22:50.707719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.412 qpair failed and we were unable to recover it. 
00:34:43.413 [2024-07-23 18:22:50.707835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.707860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.707960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.707985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.708108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.708134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.708267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.708292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.708425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.708451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 
00:34:43.413 [2024-07-23 18:22:50.708571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.708596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.708710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.708736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.708859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.708884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.709001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.709026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.709123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.709148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 
00:34:43.413 [2024-07-23 18:22:50.709241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.709266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.709383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.709409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.709519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.709544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.709666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.709692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.709785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.709810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 
00:34:43.413 [2024-07-23 18:22:50.709905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.709931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.710048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.710073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.710188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.710213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.710334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.710361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 00:34:43.413 [2024-07-23 18:22:50.710452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.413 [2024-07-23 18:22:50.710477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.413 qpair failed and we were unable to recover it. 
00:34:43.413 [... identical connect() failed (errno = 111) / qpair failed sequence repeats through 2024-07-23 18:22:50.726037, alternating between tqpair=0x7f6328000b90 and tqpair=0x7f6330000b90, addr=10.0.0.2, port=4420 ...]
00:34:43.416 [2024-07-23 18:22:50.726162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.726188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.726284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.726309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.726446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.726472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.726562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.726588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.726709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.726736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 
00:34:43.416 [2024-07-23 18:22:50.726879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.726904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.726999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.727027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.727176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.727202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.727321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.727348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.727437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.727463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 
00:34:43.416 [2024-07-23 18:22:50.727549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.416 [2024-07-23 18:22:50.727575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.416 qpair failed and we were unable to recover it. 00:34:43.416 [2024-07-23 18:22:50.727668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.727693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.727838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.727864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.727993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.728019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.728137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.728163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 
00:34:43.417 [2024-07-23 18:22:50.728281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.728308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.728504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.728530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.728644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.728670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.728751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.728777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.728878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.728908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 
00:34:43.417 [2024-07-23 18:22:50.728999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.729025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.729123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.729149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.729266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.729291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.729451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.729478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.729600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.729626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 
00:34:43.417 [2024-07-23 18:22:50.729745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.729770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.729891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.729918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.730023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.730052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.730175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.730201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.730329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.730356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 
00:34:43.417 [2024-07-23 18:22:50.730506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.730532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.730658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.730683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.730803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.730829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.730952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.730978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.731096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.731122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 
00:34:43.417 [2024-07-23 18:22:50.731246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.731273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.731373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.731400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.731493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.731520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.731635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.731661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.731781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.731807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 
00:34:43.417 [2024-07-23 18:22:50.731900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.731925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.732071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.732097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.732210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.732236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.732363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.732392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.732541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.732567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 
00:34:43.417 [2024-07-23 18:22:50.732712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.732738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.732887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.732915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.733031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.733057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.417 qpair failed and we were unable to recover it. 00:34:43.417 [2024-07-23 18:22:50.733174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.417 [2024-07-23 18:22:50.733200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.733290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.733321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 
00:34:43.418 [2024-07-23 18:22:50.733421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.733448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.733595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.733621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.733718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.733744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.733833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.733859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.733954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.733980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 
00:34:43.418 [2024-07-23 18:22:50.734078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.734106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.734196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.734222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.734345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.734372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.734497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.734523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.734609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.734639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 
00:34:43.418 [2024-07-23 18:22:50.734756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.734782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.734908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.734934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.735024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.735049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.735145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.735172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.735275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.735302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 
00:34:43.418 [2024-07-23 18:22:50.735425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.735451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.735587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.735613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.735731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.735756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.735881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.735907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.736022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.736048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 
00:34:43.418 [2024-07-23 18:22:50.736142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.736168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.736289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.736315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.736442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.736467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.736594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.736619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.736766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.736792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 
00:34:43.418 [2024-07-23 18:22:50.736879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.736905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.737004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.737031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.737130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.737157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.737243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.737269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.737393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.737420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 
00:34:43.418 [2024-07-23 18:22:50.737545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.737572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.737689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.737716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.737810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.737837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.737965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.737991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 00:34:43.418 [2024-07-23 18:22:50.738137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.418 [2024-07-23 18:22:50.738163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.418 qpair failed and we were unable to recover it. 
00:34:43.418 [2024-07-23 18:22:50.738292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.418 [2024-07-23 18:22:50.738326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.418 qpair failed and we were unable to recover it.
00:34:43.418 [2024-07-23 18:22:50.738448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.738474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.738589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.738614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.738732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.738757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.738851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.738876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.738989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.739015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.739162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.739188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.739281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.739306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.739462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.739489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.739580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.739606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.739700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.739725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.739873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.739899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.740017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.740043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.740161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.740187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.740304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.740354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.740475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.740501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.740645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.740671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.740790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.740815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.740939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.740965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.741064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.741089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.741205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.741230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.741313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.741348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.741472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.741497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.741609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.741635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.741758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.741784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.741873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.741899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.742010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.742035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.742126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.742152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.742276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.742302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.742400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.742427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.742541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.742567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.742654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.742681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.742770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.742797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.742884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.742909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.743027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.743052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.743179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.743204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.743327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.743353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.743450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.743476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.419 [2024-07-23 18:22:50.743600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.419 [2024-07-23 18:22:50.743627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.419 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.743775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.743801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.743918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.743944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.744066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.744092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.744213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.744238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.744356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.744382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.744501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.744526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.744651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.744678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.744799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.744826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.744970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.744996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.745093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.745118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.745207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.745232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.745381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.745407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.745531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.745556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.745649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.745675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.745801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.745826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.745938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.745969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.746092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.746117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.746250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.746275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.746378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.746404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.746504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.746530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.746647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.746672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.746794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.746820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.746943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.746968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.747091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.747117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.747217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.747243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.747334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.747360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.747461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.747486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.747612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.747637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.747733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.747758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.747858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.747884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.748010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.748035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.748133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.748162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.748264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.748289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.748391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.748418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.748566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.748592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.748683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.748709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.748831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.748856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.420 qpair failed and we were unable to recover it.
00:34:43.420 [2024-07-23 18:22:50.749004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.420 [2024-07-23 18:22:50.749030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.749151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.749178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.749300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.749334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.749461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.749487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.749604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.749629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.749746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.749776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.749896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.749922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.750019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.750044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.750130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.750156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.750250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.750275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.750397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.750425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.750550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.750575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.750695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.750721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.750847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.750872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.750961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.750986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.751076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.751102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.751246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.751272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.751392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.751418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.751542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.421 [2024-07-23 18:22:50.751567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.421 qpair failed and we were unable to recover it.
00:34:43.421 [2024-07-23 18:22:50.751669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.751694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.751819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.751844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.751973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.751999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.752122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.752150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.752267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.752293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 
00:34:43.421 [2024-07-23 18:22:50.752429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.752475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.752632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.752672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.752793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.752819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.752924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.752950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.753103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.753128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 
00:34:43.421 [2024-07-23 18:22:50.753246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.753271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.753369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.753397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.753544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.421 [2024-07-23 18:22:50.753570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.421 qpair failed and we were unable to recover it. 00:34:43.421 [2024-07-23 18:22:50.753695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.753720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.753846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.753871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 
00:34:43.422 [2024-07-23 18:22:50.753985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.754011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.754096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.754121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.754230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.754255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.754381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.754407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.754487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.754512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 
00:34:43.422 [2024-07-23 18:22:50.754662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.754687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.754832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.754857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.754973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.754999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.755126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.755153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.755276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.755301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 
00:34:43.422 [2024-07-23 18:22:50.755430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.755456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.755572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.755602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.755755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.755780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.755898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.755923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.756041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.756067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 
00:34:43.422 [2024-07-23 18:22:50.756169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.756194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.756321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.756348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.756467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.756493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.756614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.756639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.756752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.756778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 
00:34:43.422 [2024-07-23 18:22:50.756901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.756927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.757070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.757096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.757246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.757271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.757391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.757417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.757519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.757544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 
00:34:43.422 [2024-07-23 18:22:50.757699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.757725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.757850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.757875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.757977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.758002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.758123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.758149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.758239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.758264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 
00:34:43.422 [2024-07-23 18:22:50.758409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.758449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.758599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.758627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.758744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.758769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.758897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.758923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 00:34:43.422 [2024-07-23 18:22:50.759017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.759043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.422 qpair failed and we were unable to recover it. 
00:34:43.422 [2024-07-23 18:22:50.759188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.422 [2024-07-23 18:22:50.759213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.759313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.759347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.759453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.759479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.759642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.759667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.759794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.759820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 
00:34:43.423 [2024-07-23 18:22:50.759943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.759969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.760095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.760121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.760252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.760280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.760395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.760421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.760545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.760570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 
00:34:43.423 [2024-07-23 18:22:50.760697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.760722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.760849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.760875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.760966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.760992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.761101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.761126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.761257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.761282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 
00:34:43.423 [2024-07-23 18:22:50.761435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.761460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.761553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.761593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.761740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.761765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.761893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.761918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.762040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.762065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 
00:34:43.423 [2024-07-23 18:22:50.762179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.762204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.762329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.762366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.762489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.762514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.762635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.762661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.762748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.762773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 
00:34:43.423 [2024-07-23 18:22:50.762895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.762920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.763005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.763030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.763122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.763148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.763242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.763268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.763437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.763463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 
00:34:43.423 [2024-07-23 18:22:50.763567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.763594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.763719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.763744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.763867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.763893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.763985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.764011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.764130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.764156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 
00:34:43.423 [2024-07-23 18:22:50.764274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.764300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.764446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.764472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.423 [2024-07-23 18:22:50.764598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.423 [2024-07-23 18:22:50.764623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.423 qpair failed and we were unable to recover it. 00:34:43.424 [2024-07-23 18:22:50.764743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.424 [2024-07-23 18:22:50.764768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.424 qpair failed and we were unable to recover it. 00:34:43.424 [2024-07-23 18:22:50.764889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.424 [2024-07-23 18:22:50.764914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.424 qpair failed and we were unable to recover it. 
00:34:43.427 [2024-07-23 18:22:50.780385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.780413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.780512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.780537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.780628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.780655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.780800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.780825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.780935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.780960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 
00:34:43.427 [2024-07-23 18:22:50.781109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.781134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.781227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.781252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.781340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.781367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.781471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.781496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.781622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.781647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 
00:34:43.427 [2024-07-23 18:22:50.781796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.781821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.781914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.781939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.782036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.782062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.782185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.782211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.782305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.782336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 
00:34:43.427 [2024-07-23 18:22:50.782459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.782484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.782590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.782615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.782735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.782760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.782856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.782881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.782993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.783018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 
00:34:43.427 [2024-07-23 18:22:50.783107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.783132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.783225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.783250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.783375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.783401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.783507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.783532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.783658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.783684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 
00:34:43.427 [2024-07-23 18:22:50.783829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.783854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.783975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.784005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.784134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.784159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.784283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.784308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 00:34:43.427 [2024-07-23 18:22:50.784454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.427 [2024-07-23 18:22:50.784480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.427 qpair failed and we were unable to recover it. 
00:34:43.427 [2024-07-23 18:22:50.784573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.784598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.784716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.784742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.784828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.784853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.784973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.785000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.785103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.785128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 
00:34:43.428 [2024-07-23 18:22:50.785246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.785272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.785388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.785414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.785536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.785561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.785709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.785734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.785853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.785879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 
00:34:43.428 [2024-07-23 18:22:50.786007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.786033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.786161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.786187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.786310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.786342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.786437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.786463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.786558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.786583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 
00:34:43.428 [2024-07-23 18:22:50.786671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.786696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.786815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.786841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.786964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.786989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.787114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.787139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.787257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.787282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 
00:34:43.428 [2024-07-23 18:22:50.787435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.787461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.787582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.787607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.787733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.787759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.787883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.787908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.788058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.788084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 
00:34:43.428 [2024-07-23 18:22:50.788228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.788254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.788382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.788408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.788511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.788536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.788661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.788686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.788779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.788804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 
00:34:43.428 [2024-07-23 18:22:50.788920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.788946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.789041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.789065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.789182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.789207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.789297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.789328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.789435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.789460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 
00:34:43.428 [2024-07-23 18:22:50.789607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.789633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.789723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.789753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.789902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.428 [2024-07-23 18:22:50.789927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.428 qpair failed and we were unable to recover it. 00:34:43.428 [2024-07-23 18:22:50.790048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.429 [2024-07-23 18:22:50.790073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.429 qpair failed and we were unable to recover it. 00:34:43.429 [2024-07-23 18:22:50.790222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.429 [2024-07-23 18:22:50.790247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.429 qpair failed and we were unable to recover it. 
00:34:43.429 [2024-07-23 18:22:50.790377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.429 [2024-07-23 18:22:50.790403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.429 qpair failed and we were unable to recover it. 00:34:43.429 [2024-07-23 18:22:50.790521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.429 [2024-07-23 18:22:50.790546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.429 qpair failed and we were unable to recover it. 00:34:43.429 [2024-07-23 18:22:50.790679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.429 [2024-07-23 18:22:50.790706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.429 qpair failed and we were unable to recover it. 00:34:43.429 [2024-07-23 18:22:50.790839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.429 [2024-07-23 18:22:50.790865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.429 qpair failed and we were unable to recover it. 00:34:43.429 [2024-07-23 18:22:50.790986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.429 [2024-07-23 18:22:50.791011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.429 qpair failed and we were unable to recover it. 
00:34:43.429 [2024-07-23 18:22:50.791140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.429 [2024-07-23 18:22:50.791165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.429 qpair failed and we were unable to recover it. 
00:34:43.432 [2024-07-23 18:22:50.807647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.807672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.807764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.807789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.807920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.807945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.808042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.808067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.808211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.808236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 
00:34:43.432 [2024-07-23 18:22:50.808358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.808384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.808471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.808497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.808623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.808648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.808735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.808760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.808855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.808880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 
00:34:43.432 [2024-07-23 18:22:50.808973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.808998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.809123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.809148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.809269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.809294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.809431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.809457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.809560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.809586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 
00:34:43.432 [2024-07-23 18:22:50.809740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.809765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.809881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.809906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.810003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.810029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.810119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.810145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.810291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.810323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 
00:34:43.432 [2024-07-23 18:22:50.810448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.810474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.810564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.810598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.432 [2024-07-23 18:22:50.810691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.432 [2024-07-23 18:22:50.810718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.432 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.810816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.810841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.810999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.811024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 
00:34:43.433 [2024-07-23 18:22:50.811167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.811193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.811329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.811356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.811470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.811495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.811582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.811607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.811722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.811747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 
00:34:43.433 [2024-07-23 18:22:50.811867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.811892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.812012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.812037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.812137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.812162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.812276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.812301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.812435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.812461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 
00:34:43.433 [2024-07-23 18:22:50.812561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.812586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.812680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.812706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.812808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.812833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.812980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.813005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.813102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.813127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 
00:34:43.433 [2024-07-23 18:22:50.813246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.813271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.813425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.813452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.813551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.813576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.813722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.813748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.813864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.813889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 
00:34:43.433 [2024-07-23 18:22:50.814036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.814061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.814171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.814196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.814310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.814343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.814445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.814472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.814566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.814591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 
00:34:43.433 [2024-07-23 18:22:50.814716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.814741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.814840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.814864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.814981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.815006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.815146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.815171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.815295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.815344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 
00:34:43.433 [2024-07-23 18:22:50.815466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.815492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.815616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.815641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.815738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.815763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.815880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.433 [2024-07-23 18:22:50.815906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.433 qpair failed and we were unable to recover it. 00:34:43.433 [2024-07-23 18:22:50.816030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.816055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 
00:34:43.434 [2024-07-23 18:22:50.816178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.816203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.816291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.816323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.816451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.816476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.816564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.816596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.816711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.816736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 
00:34:43.434 [2024-07-23 18:22:50.816834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.816859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.816958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.816983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.817070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.817096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.817214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.817239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.817360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.817387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 
00:34:43.434 [2024-07-23 18:22:50.817481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.817506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.817642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.817667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.817766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.817792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.817916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.817940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.818045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.818070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 
00:34:43.434 [2024-07-23 18:22:50.818165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.818190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.818315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.818345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.818478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.818504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.818635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.818660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 00:34:43.434 [2024-07-23 18:22:50.818787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.434 [2024-07-23 18:22:50.818812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.434 qpair failed and we were unable to recover it. 
[... the same triplet — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats for timestamps 2024-07-23 18:22:50.818937 through 18:22:50.834864 (log clock 00:34:43.434–00:34:43.437); duplicate entries omitted ...]
00:34:43.437 [2024-07-23 18:22:50.834985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.835010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 00:34:43.437 [2024-07-23 18:22:50.835133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.835158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 00:34:43.437 [2024-07-23 18:22:50.835251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.835277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 00:34:43.437 [2024-07-23 18:22:50.835405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.835431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 00:34:43.437 [2024-07-23 18:22:50.835549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.835574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 
00:34:43.437 [2024-07-23 18:22:50.835722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.835747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 00:34:43.437 [2024-07-23 18:22:50.835845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.835870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 00:34:43.437 [2024-07-23 18:22:50.835996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.836022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 00:34:43.437 [2024-07-23 18:22:50.836118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.836143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 00:34:43.437 [2024-07-23 18:22:50.836236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.437 [2024-07-23 18:22:50.836261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.437 qpair failed and we were unable to recover it. 
00:34:43.438 [2024-07-23 18:22:50.836383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.836409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.836525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.836550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.836669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.836693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.836816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.836841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.836956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.836982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 
00:34:43.438 [2024-07-23 18:22:50.837100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.837125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.837220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.837245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.837334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.837361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.837475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.837500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.837585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.837610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 
00:34:43.438 [2024-07-23 18:22:50.837698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.837723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.837832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.837858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.837980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.838007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.838102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.838128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.838249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.838274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 
00:34:43.438 [2024-07-23 18:22:50.838397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.838424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.838548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.838573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.838722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.838747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.838865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.838891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.838983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.839008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 
00:34:43.438 [2024-07-23 18:22:50.839094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.839119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.839266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.839291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.839401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.839428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.839582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.839611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.839697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.839722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 
00:34:43.438 [2024-07-23 18:22:50.839842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.839867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.839962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.839987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.840077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.840101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.840225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.840250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.840365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.840392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 
00:34:43.438 [2024-07-23 18:22:50.840517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.840542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.840671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.840696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.840785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.840810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.840927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.840952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.841042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.841068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 
00:34:43.438 [2024-07-23 18:22:50.841203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.841228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.841362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.438 [2024-07-23 18:22:50.841388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.438 qpair failed and we were unable to recover it. 00:34:43.438 [2024-07-23 18:22:50.841513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.841540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.841686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.841711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.841857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.841882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 
00:34:43.439 [2024-07-23 18:22:50.841997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.842022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.842166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.842191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.842310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.842340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.842464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.842489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.842610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.842635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 
00:34:43.439 [2024-07-23 18:22:50.842756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.842781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.842904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.842929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.843064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.843089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.843217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.843243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.843361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.843405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 
00:34:43.439 [2024-07-23 18:22:50.843507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.843533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.843628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.843654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.843751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.843776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.843891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.843916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.844013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.844038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 
00:34:43.439 [2024-07-23 18:22:50.844156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.844181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.844305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.844337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.844473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.844498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.844595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.844620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.844722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.844747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 
00:34:43.439 [2024-07-23 18:22:50.844865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.844890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.845021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.845046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.845172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.845197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.845294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.845333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.845465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.845490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 
00:34:43.439 [2024-07-23 18:22:50.845581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.845607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.845753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.845778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.845895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.845921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.846040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.846065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.846184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.846210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 
00:34:43.439 [2024-07-23 18:22:50.846309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.846358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.846477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.846503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.846625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.846651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.846796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.439 [2024-07-23 18:22:50.846821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.439 qpair failed and we were unable to recover it. 00:34:43.439 [2024-07-23 18:22:50.846919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.440 [2024-07-23 18:22:50.846945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.440 qpair failed and we were unable to recover it. 
00:34:43.440 [2024-07-23 18:22:50.847064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.847089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.847177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.847203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.847300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.847334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.847436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.847462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.847555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.847580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.847698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.847723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.847843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.847868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.847964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.847989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.848147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.848173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.848300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.848333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.848461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.848486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.848631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.848657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.848753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.848778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.848880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.848905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.849020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.849045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.849143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.849168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.849290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.849322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.849416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.849441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.849559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.849584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.849702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.849727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.849857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.849882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.849982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.850007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.850150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.850176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.850296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.850345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.850471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.850498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.850596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.850621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.850734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.850759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.850849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.850875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.850995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.851025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.851128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.851153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.851250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.440 [2024-07-23 18:22:50.851275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.440 qpair failed and we were unable to recover it.
00:34:43.440 [2024-07-23 18:22:50.851377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.851403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.851501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.851526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.851649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.851674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.851795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.851820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.851943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.851968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.852091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.852117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.852208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.852233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.852384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.852410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.852501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.852526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.852616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.852643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.852789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.852815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.852941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.852966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.853087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.853112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.853224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.853249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.853351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.853376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.853460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.853485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.853598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.853623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.853728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.853753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.853870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.853896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.854015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.854040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.854190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.854215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.854305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.854338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.854440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.854465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.854585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.854610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.854658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c1ef0 (9): Bad file descriptor
00:34:43.441 [2024-07-23 18:22:50.854857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.854896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.855019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.855047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.855170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.855195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.855342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.855369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.855462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.855488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.855580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.855605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.855724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.855750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.855893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.855918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.856014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.856039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.856134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.856162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.856286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.856311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.856442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.856467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.856588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.441 [2024-07-23 18:22:50.856613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.441 qpair failed and we were unable to recover it.
00:34:43.441 [2024-07-23 18:22:50.856703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.856728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.856819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.856844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.856963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.856989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.857108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.857133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.857227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.857252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.857371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.857397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.857508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.857533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.857643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.857669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.857818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.857843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.857960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.857985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.858080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.858106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.858236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.858261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.858384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.858409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.858533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.858562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.858651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.858676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.858804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.858830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.858978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.859003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.859138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.859164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.859286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.859311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.859438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.859463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.859558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.859584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.859709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.859735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.859830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.859855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.859988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.860013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.860131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.860157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.860257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.860282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.860377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.442 [2024-07-23 18:22:50.860403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.442 qpair failed and we were unable to recover it.
00:34:43.442 [2024-07-23 18:22:50.860529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.860554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.860706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.860732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.860858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.860884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.861020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.861046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.861170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.861195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 
00:34:43.442 [2024-07-23 18:22:50.861324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.861350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.861470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.861495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.861616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.861641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.861789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.861814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.861948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.861973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 
00:34:43.442 [2024-07-23 18:22:50.862118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.442 [2024-07-23 18:22:50.862143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.442 qpair failed and we were unable to recover it. 00:34:43.442 [2024-07-23 18:22:50.862274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.862313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.862441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.862468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.862596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.862622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.862739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.862765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 
00:34:43.443 [2024-07-23 18:22:50.862894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.862919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.863044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.863071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.863190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.863216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.863335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.863362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.863518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.863544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 
00:34:43.443 [2024-07-23 18:22:50.863663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.863688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.863812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.863837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.863961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.863987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.864078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.864103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.864190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.864215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 
00:34:43.443 [2024-07-23 18:22:50.864332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.864359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.864477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.864507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.864654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.864680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.864774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.864800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.864952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.864980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 
00:34:43.443 [2024-07-23 18:22:50.865101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.865126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.865272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.865297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.865399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.865426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.865514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.865541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.865640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.865666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 
00:34:43.443 [2024-07-23 18:22:50.865758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.865783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.865918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.865943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.866090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.866116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.866214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.866242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.866369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.866395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 
00:34:43.443 [2024-07-23 18:22:50.866492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.866519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.866644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.866670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.866775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.866801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.866921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.866947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.867097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.867125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 
00:34:43.443 [2024-07-23 18:22:50.867246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.867272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.867401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.867427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.867525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.867550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.443 [2024-07-23 18:22:50.867665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.443 [2024-07-23 18:22:50.867690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.443 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.867789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.867815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 
00:34:43.444 [2024-07-23 18:22:50.867937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.867962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.868054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.868080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.868173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.868198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.868324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.868351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.868439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.868465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 
00:34:43.444 [2024-07-23 18:22:50.868561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.868586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.868679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.868705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.868802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.868828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.868955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.868980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.869065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.869091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 
00:34:43.444 [2024-07-23 18:22:50.869226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.869251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.869371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.869397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.869531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.869556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.869673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.869698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.869832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.869857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 
00:34:43.444 [2024-07-23 18:22:50.869958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.869984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.870105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.870134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.870218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.870244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.870386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.870413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.870501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.870526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 
00:34:43.444 [2024-07-23 18:22:50.870622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.870648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.870792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.870817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.870937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.870963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.871082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.871107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.871194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.871219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 
00:34:43.444 [2024-07-23 18:22:50.871315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.871346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.871444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.871471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.871585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.871610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.871709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.871735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.871882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.871907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 
00:34:43.444 [2024-07-23 18:22:50.872007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.872032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.444 qpair failed and we were unable to recover it. 00:34:43.444 [2024-07-23 18:22:50.872143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.444 [2024-07-23 18:22:50.872169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.445 qpair failed and we were unable to recover it. 00:34:43.445 [2024-07-23 18:22:50.872288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.445 [2024-07-23 18:22:50.872313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.445 qpair failed and we were unable to recover it. 00:34:43.445 [2024-07-23 18:22:50.872414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.445 [2024-07-23 18:22:50.872440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.445 qpair failed and we were unable to recover it. 00:34:43.445 [2024-07-23 18:22:50.872525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.445 [2024-07-23 18:22:50.872552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.445 qpair failed and we were unable to recover it. 
00:34:43.445 [2024-07-23 18:22:50.872647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.445 [2024-07-23 18:22:50.872672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.445 qpair failed and we were unable to recover it.
00:34:43.446 [2024-07-23 18:22:50.881430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.446 [2024-07-23 18:22:50.881474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.446 qpair failed and we were unable to recover it.
00:34:43.448 [2024-07-23 18:22:50.889107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.889132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.889276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.889302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.889407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.889433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.889534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.889559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.889648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.889674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 
00:34:43.448 [2024-07-23 18:22:50.889757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.889782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.889927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.889952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.890076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.890101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.890191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.890216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.890331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.890358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 
00:34:43.448 [2024-07-23 18:22:50.890476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.890502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.890594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.890620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.890734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.890760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.890854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.890881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.891000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.891025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 
00:34:43.448 [2024-07-23 18:22:50.891145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.891170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.891326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.891352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.891432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.891457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.891557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.891582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.891706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.891730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 
00:34:43.448 [2024-07-23 18:22:50.891849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.891875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.891970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.891995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.892110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.892135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.892226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.892252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.892382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.892407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 
00:34:43.448 [2024-07-23 18:22:50.892522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.892547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.892682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.892711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.892832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.892857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.892955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.892981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.893093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.893118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 
00:34:43.448 [2024-07-23 18:22:50.893245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.893271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.893365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.893391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.893504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.448 [2024-07-23 18:22:50.893529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.448 qpair failed and we were unable to recover it. 00:34:43.448 [2024-07-23 18:22:50.893617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.893642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.893741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.893768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 
00:34:43.449 [2024-07-23 18:22:50.893887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.893913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.894037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.894062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.894177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.894202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.894292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.894326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.894435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.894461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 
00:34:43.449 [2024-07-23 18:22:50.894615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.894640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.894754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.894779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.894868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.894893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.895014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.895040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.895139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.895164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 
00:34:43.449 [2024-07-23 18:22:50.895287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.895312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.895445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.895471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.895586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.895611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.895732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.895758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.895875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.895900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 
00:34:43.449 [2024-07-23 18:22:50.895984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.896009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.896152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.896178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.896302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.896334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.896470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.896514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.896632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.896669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 
00:34:43.449 [2024-07-23 18:22:50.896806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.896833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.896991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.897017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.897106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.897131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.897279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.897304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.897412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.897438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 
00:34:43.449 [2024-07-23 18:22:50.897527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.897553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.897676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.897701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.897822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.897849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.897970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.897995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 00:34:43.449 [2024-07-23 18:22:50.898089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.898114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.449 qpair failed and we were unable to recover it. 
00:34:43.449 [2024-07-23 18:22:50.898238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.449 [2024-07-23 18:22:50.898263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.898361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.898392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.898490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.898515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.898605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.898630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.898754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.898779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 
00:34:43.450 [2024-07-23 18:22:50.898903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.898928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.899045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.899072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.899218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.899243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.899341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.899369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.899520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.899546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 
00:34:43.450 [2024-07-23 18:22:50.899665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.899690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.899780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.899806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.899928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.899955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.900047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.900072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.900190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.900215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 
00:34:43.450 [2024-07-23 18:22:50.900348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.900375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.900469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.900494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.900607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.900632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.900712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.900738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.900836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.900860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 
00:34:43.450 [2024-07-23 18:22:50.900981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.901006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.901126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.901151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.901251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.901276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.901401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.901427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.901544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.901570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 
00:34:43.450 [2024-07-23 18:22:50.901715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.901739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.901854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.901879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.901991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.902016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.902141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.902167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.902262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.902287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 
00:34:43.450 [2024-07-23 18:22:50.902388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.902416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.902536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.902562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.902653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.902680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.902834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.902859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.902948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.902975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 
00:34:43.450 [2024-07-23 18:22:50.903075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.903101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.903267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.903292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.903447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.903474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.450 qpair failed and we were unable to recover it. 00:34:43.450 [2024-07-23 18:22:50.903604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.450 [2024-07-23 18:22:50.903629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.903722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.903748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 
00:34:43.451 [2024-07-23 18:22:50.903896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.903921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.904067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.904096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.904189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.904220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.904353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.904379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.904528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.904554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 
00:34:43.451 [2024-07-23 18:22:50.904710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.904736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.904856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.904881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.905000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.905025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.905145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.905170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.905289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.905314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 
00:34:43.451 [2024-07-23 18:22:50.905417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.905442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.905585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.905610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.905741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.905767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.905891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.905917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.906018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.906046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 
00:34:43.451 [2024-07-23 18:22:50.906196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.906221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.906346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.906373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.906525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.906549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.906663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.906688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.906812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.906836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 
00:34:43.451 [2024-07-23 18:22:50.906932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.906957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.907070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.907094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.907209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.907234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.907327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.907353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.907450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.907475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 
00:34:43.451 [2024-07-23 18:22:50.907597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.907623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.907745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.907769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.907919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.907944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.908101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.908126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.908248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.908272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 
00:34:43.451 [2024-07-23 18:22:50.908366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.908392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.908537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.908562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.908665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.908691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.908812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.908836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.908955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.908980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 
00:34:43.451 [2024-07-23 18:22:50.909067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.451 [2024-07-23 18:22:50.909092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.451 qpair failed and we were unable to recover it. 00:34:43.451 [2024-07-23 18:22:50.909213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.909238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.909363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.909389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.909514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.909538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.909662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.909686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 
00:34:43.452 [2024-07-23 18:22:50.909779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.909803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.909914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.909944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.910068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.910094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.910220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.910245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.910346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.910373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 
00:34:43.452 [2024-07-23 18:22:50.910497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.910523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.910671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.910696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.910842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.910868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.911017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.911042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.911162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.911187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 
00:34:43.452 [2024-07-23 18:22:50.911306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.911338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.911438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.911463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.911607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.911631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.911745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.911770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.911857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.911882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 
00:34:43.452 [2024-07-23 18:22:50.911974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.911999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.912118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.912143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.912259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.912283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.912397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.912423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.912512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.912536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 
00:34:43.452 [2024-07-23 18:22:50.912657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.912682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.912773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.912798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.912920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.912945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.913062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.913088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 00:34:43.452 [2024-07-23 18:22:50.913185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.452 [2024-07-23 18:22:50.913209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.452 qpair failed and we were unable to recover it. 
00:34:43.452 [2024-07-23 18:22:50.913351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:43.452 [2024-07-23 18:22:50.913377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 
00:34:43.452 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeated continuously for tqpair=0x7f6328000b90 and tqpair=0x7f6330000b90, addr=10.0.0.2, port=4420, from 18:22:50.913523 through 18:22:50.929808 ...]
00:34:43.455 [2024-07-23 18:22:50.929932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.929958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.930074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.930099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.930186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.930214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.930338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.930363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.930468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.930493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 
00:34:43.456 [2024-07-23 18:22:50.930579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.930604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.930699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.930724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.930847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.930871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.930964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.930994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.931111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.931137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 
00:34:43.456 [2024-07-23 18:22:50.931252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.931276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.931400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.931427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.931545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.931570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.931692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.931717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.931802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.931827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 
00:34:43.456 [2024-07-23 18:22:50.931910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.931934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.932047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.932072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.932202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.932230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.932358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.932384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.932483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.932509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 
00:34:43.456 [2024-07-23 18:22:50.932602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.932627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.932722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.932747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.932897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.932922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.933017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.933043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.933165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.933190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 
00:34:43.456 [2024-07-23 18:22:50.933313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.933355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.933447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.933472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.933563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.933588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.933706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.933731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.933857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.933881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 
00:34:43.456 [2024-07-23 18:22:50.933968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.933993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.934082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.934106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.934197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.934225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.934327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.934354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.934508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.934534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 
00:34:43.456 [2024-07-23 18:22:50.934636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.934662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.934749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.934775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.934892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.934918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.935039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.935064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.935187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.935212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 
00:34:43.456 [2024-07-23 18:22:50.935302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.456 [2024-07-23 18:22:50.935332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.456 qpair failed and we were unable to recover it. 00:34:43.456 [2024-07-23 18:22:50.935425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.935451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.935545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.935570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.935690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.935716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.935826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.935851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 
00:34:43.457 [2024-07-23 18:22:50.935964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.935990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.936137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.936162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.936284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.936312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.936411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.936441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.936571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.936595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 
00:34:43.457 [2024-07-23 18:22:50.936718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.936744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.936861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.936886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.937030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.937055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.937208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.937235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.937355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.937382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 
00:34:43.457 [2024-07-23 18:22:50.937504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.937530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.937651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.937677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.937799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.937824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.937945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.937970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.938055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.938080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 
00:34:43.457 [2024-07-23 18:22:50.938202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.938227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.938347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.938373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.938503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.938529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.938615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.938640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.938785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.938810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 
00:34:43.457 [2024-07-23 18:22:50.938931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.938959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.939077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.939102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.939190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.939216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.939313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.939344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.939435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.939462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 
00:34:43.457 [2024-07-23 18:22:50.939558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.939583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.939705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.939732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.939826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.939851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.939947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.939974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 00:34:43.457 [2024-07-23 18:22:50.940086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.457 [2024-07-23 18:22:50.940112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.457 qpair failed and we were unable to recover it. 
00:34:43.457 [2024-07-23 18:22:50.940206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.457 [2024-07-23 18:22:50.940232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.457 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats roughly 110 more times between event timestamps 18:22:50.940350 and 18:22:50.956697 (log timestamps 00:34:43.457 through 00:34:43.461), with the tqpair handle alternating among 0x7f6320000b90, 0x7f6328000b90, and 0x7f6330000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, after which the qpair fails and cannot be recovered.]
00:34:43.461 [2024-07-23 18:22:50.956796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.956822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.956969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.956997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.957080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.957105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.957243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.957269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.957364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.957391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 
00:34:43.461 [2024-07-23 18:22:50.957512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.957538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.957651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.957677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.957804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.957830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.957978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.958004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.958128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.958155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 
00:34:43.461 [2024-07-23 18:22:50.958280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.958306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.958447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.958474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.958572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.958597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.958761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.958787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 00:34:43.461 [2024-07-23 18:22:50.958911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.461 [2024-07-23 18:22:50.958936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.461 qpair failed and we were unable to recover it. 
00:34:43.461 [2024-07-23 18:22:50.959061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.959089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.959213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.959239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.959382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.959410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.959540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.959570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.959716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.959741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 
00:34:43.462 [2024-07-23 18:22:50.959838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.959863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.960013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.960039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.960132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.960157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.960261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.960286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.960417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.960444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 
00:34:43.462 [2024-07-23 18:22:50.960541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.960567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.960691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.960717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.960837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.960863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.960979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.961004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.961126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.961152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 
00:34:43.462 [2024-07-23 18:22:50.961265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.961311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.961457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.961485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.961615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.961643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.961760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.961787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.961909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.961935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 
00:34:43.462 [2024-07-23 18:22:50.962053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.962079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.962171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.962197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.962354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.962381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.962504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.962530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.962625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.962650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 
00:34:43.462 [2024-07-23 18:22:50.962754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.962781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.962895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.962921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.963015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.963041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.963166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.963204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.963310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.963344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 
00:34:43.462 [2024-07-23 18:22:50.963471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.963498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.963593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.963618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.963736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.963761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.963918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.963943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 00:34:43.462 [2024-07-23 18:22:50.964031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.462 [2024-07-23 18:22:50.964056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.462 qpair failed and we were unable to recover it. 
00:34:43.462 [2024-07-23 18:22:50.964180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.964206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.964332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.964360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.964452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.964478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.964574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.964599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.964720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.964746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 
00:34:43.463 [2024-07-23 18:22:50.964864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.964891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.965011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.965037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.965128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.965154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.965271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.965302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.965409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.965435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 
00:34:43.463 [2024-07-23 18:22:50.965589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.965614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.965732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.965757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.965842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.965868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.965957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.965982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.966073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.966098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 
00:34:43.463 [2024-07-23 18:22:50.966222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.966247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.966374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.966402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.966496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.966522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.966643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.966668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.966782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.966807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 
00:34:43.463 [2024-07-23 18:22:50.966927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.966952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.967073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.967097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.967247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.967273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.967395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.967421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 00:34:43.463 [2024-07-23 18:22:50.967535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.463 [2024-07-23 18:22:50.967560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.463 qpair failed and we were unable to recover it. 
00:34:43.463 [2024-07-23 18:22:50.967681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.463 [2024-07-23 18:22:50.967707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.463 qpair failed and we were unable to recover it.
00:34:43.463 [2024-07-23 18:22:50.967800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.463 [2024-07-23 18:22:50.967826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.463 qpair failed and we were unable to recover it.
[… the same three-line error sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 / 0x7f6330000b90 / 0x7f6320000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 18:22:50.967948 through 18:22:50.983874 …]
00:34:43.467 [2024-07-23 18:22:50.983993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.984017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 00:34:43.467 [2024-07-23 18:22:50.984140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.984165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 00:34:43.467 [2024-07-23 18:22:50.984266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.984298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 00:34:43.467 [2024-07-23 18:22:50.984402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.984428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 00:34:43.467 [2024-07-23 18:22:50.984545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.984571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 
00:34:43.467 [2024-07-23 18:22:50.984665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.984691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 00:34:43.467 [2024-07-23 18:22:50.984811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.984836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 00:34:43.467 [2024-07-23 18:22:50.984959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.984985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 00:34:43.467 [2024-07-23 18:22:50.985134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.985160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 00:34:43.467 [2024-07-23 18:22:50.985283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.467 [2024-07-23 18:22:50.985309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.467 qpair failed and we were unable to recover it. 
00:34:43.468 [2024-07-23 18:22:50.985406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.985431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.985558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.985583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.985697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.985722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.985868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.985893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.985984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.986009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 
00:34:43.468 [2024-07-23 18:22:50.986149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.986188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.986311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.986345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.986435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.986461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.986609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.986634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.986732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.986758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 
00:34:43.468 [2024-07-23 18:22:50.986883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.986911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.987005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.987032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.987179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.987204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.987327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.987352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.987463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.987487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 
00:34:43.468 [2024-07-23 18:22:50.987604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.987629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.987749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.987774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.987864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.987890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.988011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.988036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.988159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.988187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 
00:34:43.468 [2024-07-23 18:22:50.988315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.988348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.988444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.988470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.988596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.988621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.988717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.988744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.988863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.988890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 
00:34:43.468 [2024-07-23 18:22:50.989007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.989034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.989153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.989178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.989301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.989342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.989444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.989469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 00:34:43.468 [2024-07-23 18:22:50.989561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.468 [2024-07-23 18:22:50.989585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.468 qpair failed and we were unable to recover it. 
00:34:43.469 [2024-07-23 18:22:50.989671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.989695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.989815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.989839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.989960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.989986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.990069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.990093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.990192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.990228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 
00:34:43.469 [2024-07-23 18:22:50.990355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.990382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.990478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.990510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.990601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.990626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.990717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.990742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.990862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.990889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 
00:34:43.469 [2024-07-23 18:22:50.991008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.991034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.991142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.991180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.991314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.991348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.991470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.991495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.991616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.991642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 
00:34:43.469 [2024-07-23 18:22:50.991758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.991789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.991936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.991962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.992051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.992076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.992196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.992222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.992313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.992347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 
00:34:43.469 [2024-07-23 18:22:50.992445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.992471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.992595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.992620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.992739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.992765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.469 [2024-07-23 18:22:50.992909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.469 [2024-07-23 18:22:50.992935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.469 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.993022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.993048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 
00:34:43.470 [2024-07-23 18:22:50.993140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.993166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.993289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.993320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.993465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.993491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.993611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.993637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.993762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.993788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 
00:34:43.470 [2024-07-23 18:22:50.993880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.993906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.994031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.994057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.994174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.994201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.994286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.994312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.994417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.994441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 
00:34:43.470 [2024-07-23 18:22:50.994541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.994566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.994655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.994679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.994795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.994820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.994913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.994937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 00:34:43.470 [2024-07-23 18:22:50.995020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.470 [2024-07-23 18:22:50.995046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.470 qpair failed and we were unable to recover it. 
00:34:43.474 [2024-07-23 18:22:51.010509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.010534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.010627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.010653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.010773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.010798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.010891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.010917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.011036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.011062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 
00:34:43.474 [2024-07-23 18:22:51.011159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.011184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.011331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.011358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.011477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.011502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.011596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.011621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.011740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.011765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 
00:34:43.474 [2024-07-23 18:22:51.011858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.011883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.012008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.012033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.012161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.012186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.012304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.012342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 00:34:43.474 [2024-07-23 18:22:51.012435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.474 [2024-07-23 18:22:51.012462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.474 qpair failed and we were unable to recover it. 
00:34:43.474 [2024-07-23 18:22:51.012581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.012606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.012731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.012756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.012851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.012877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.012972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.012997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.013120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.013147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 
00:34:43.475 [2024-07-23 18:22:51.013271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.013296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.013394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.013419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.013537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.013562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.013678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.013702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.013801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.013825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 
00:34:43.475 [2024-07-23 18:22:51.013944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.013973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.014089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.014115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.014257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.014282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.014388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.014413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.014507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.014532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 
00:34:43.475 [2024-07-23 18:22:51.014646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.014671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.014795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.014821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.014917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.014942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.015038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.015063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.015206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.015232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 
00:34:43.475 [2024-07-23 18:22:51.015344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.015370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.475 [2024-07-23 18:22:51.015490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.475 [2024-07-23 18:22:51.015516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.475 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.015661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.015686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.015816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.015842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.015960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.015985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 
00:34:43.476 [2024-07-23 18:22:51.016099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.016125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.016248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.016272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.016372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.016399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.016544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.016570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.016665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.016690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 
00:34:43.476 [2024-07-23 18:22:51.016806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.016832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.016927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.016953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.017067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.017093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.017239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.017264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.017372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.017401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 
00:34:43.476 [2024-07-23 18:22:51.017497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.017523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.017612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.017637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.017764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.017790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.017881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.017907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.017997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.018022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 
00:34:43.476 [2024-07-23 18:22:51.018141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.018168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.018310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.018340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.018458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.018483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.018605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.018630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.018783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.018808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 
00:34:43.476 [2024-07-23 18:22:51.018955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.018980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.019099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.019124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.019274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.019300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.019399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.019425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.019518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.019543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 
00:34:43.476 [2024-07-23 18:22:51.019635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.019665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.019746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.476 [2024-07-23 18:22:51.019772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.476 qpair failed and we were unable to recover it. 00:34:43.476 [2024-07-23 18:22:51.019890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.477 [2024-07-23 18:22:51.019915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.477 qpair failed and we were unable to recover it. 00:34:43.477 [2024-07-23 18:22:51.020033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.477 [2024-07-23 18:22:51.020059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.477 qpair failed and we were unable to recover it. 00:34:43.477 [2024-07-23 18:22:51.020180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.477 [2024-07-23 18:22:51.020205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.477 qpair failed and we were unable to recover it. 
00:34:43.477 [2024-07-23 18:22:51.020293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.477 [2024-07-23 18:22:51.020326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.477 qpair failed and we were unable to recover it. 00:34:43.477 [2024-07-23 18:22:51.020452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.477 [2024-07-23 18:22:51.020478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.477 qpair failed and we were unable to recover it. 00:34:43.477 [2024-07-23 18:22:51.020598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.477 [2024-07-23 18:22:51.020623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.477 qpair failed and we were unable to recover it. 00:34:43.477 [2024-07-23 18:22:51.020713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.477 [2024-07-23 18:22:51.020739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.477 qpair failed and we were unable to recover it. 00:34:43.477 [2024-07-23 18:22:51.020850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.477 [2024-07-23 18:22:51.020876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.477 qpair failed and we were unable to recover it. 
00:34:43.477 [2024-07-23 18:22:51.021003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.021029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.021119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.021147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.021293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.021322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.021459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.021485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.021609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.021634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.021752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.021777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.021864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.021889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.022035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.022061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.022154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.022179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.022267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.022292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.022403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.022432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.022582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.022608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.022726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.022751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.022846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.022871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.022960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.022985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.023103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.023128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.023260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.023286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.023424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.023450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.023567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.023592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.023687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.023713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.023832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.023857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.023950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.023975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.024123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.024148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.024266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.024292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.024422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.477 [2024-07-23 18:22:51.024448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.477 qpair failed and we were unable to recover it.
00:34:43.477 [2024-07-23 18:22:51.024565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.024590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.024679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.024704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.024819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.024844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.024963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.024988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.025107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.025132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.025278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.025307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.025465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.025491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.025604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.025629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.025779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.025805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.025928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.025956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.026087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.026112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.026205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.026232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.026357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.026383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.026539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.026565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.026682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.026708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.026826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.026853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.026977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.027002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.027094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.027119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.027266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.027292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.027421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.027448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.027543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.027569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.027712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.027737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.027840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.027866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.027989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.028015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.028129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.028155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.028268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.028294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.028444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.028486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.028598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.028626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.028746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.028772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.028896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.028922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.029041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.029068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.029171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.029197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.029290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.029323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.478 [2024-07-23 18:22:51.029451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.478 [2024-07-23 18:22:51.029477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.478 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.029596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.029621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.029748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.029774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.029897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.029923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.030023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.030049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.030172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.030199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.030326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.030354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.030448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.030473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.030565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.030590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.030687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.030713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.030803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.030830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.030932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.030958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.031085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.031115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.031201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.031226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.031313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.031346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.031434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.031460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.031583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.031609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.031697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.031723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.031841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.031867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.031986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.032012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.032125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.032150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.032273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.032298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.032401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.032427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.032524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.032550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.032699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.032724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.032860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.032886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.033016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.033042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.033160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.033186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.033275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.033300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.033414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.033443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.479 [2024-07-23 18:22:51.033544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.479 [2024-07-23 18:22:51.033570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.479 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.033689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.033716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.033832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.033858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.033944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.033970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.034117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.034142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.034235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.034262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.034382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.034409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.034534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.034561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.034708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.034734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.034858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.034885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.034984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.035010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.035146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.035171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.035266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.035291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.035399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.035425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.035520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.035546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.035666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.035692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.035825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.035850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.035999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.036024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.036137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.036162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.036285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.036311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.036439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.036465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.036583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.036608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.036756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.036786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.036878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.036906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.037005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.037031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.037126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.037152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.037275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.480 [2024-07-23 18:22:51.037301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.480 qpair failed and we were unable to recover it.
00:34:43.480 [2024-07-23 18:22:51.037402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.480 [2024-07-23 18:22:51.037428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.480 qpair failed and we were unable to recover it. 00:34:43.480 [2024-07-23 18:22:51.037546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.480 [2024-07-23 18:22:51.037572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.480 qpair failed and we were unable to recover it. 00:34:43.480 [2024-07-23 18:22:51.037691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.480 [2024-07-23 18:22:51.037716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.480 qpair failed and we were unable to recover it. 00:34:43.480 [2024-07-23 18:22:51.037812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.480 [2024-07-23 18:22:51.037840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.480 qpair failed and we were unable to recover it. 00:34:43.480 [2024-07-23 18:22:51.037966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.480 [2024-07-23 18:22:51.037991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.480 qpair failed and we were unable to recover it. 
00:34:43.480 [2024-07-23 18:22:51.038083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.480 [2024-07-23 18:22:51.038110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.038202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.038229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.038352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.038378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.038495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.038521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.038651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.038677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 
00:34:43.481 [2024-07-23 18:22:51.038798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.038823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.038915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.038940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.039033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.039060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.039184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.039210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.039304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.039339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 
00:34:43.481 [2024-07-23 18:22:51.039440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.039465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.039583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.039609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.039726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.039752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.039899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.039925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.040016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.040042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 
00:34:43.481 [2024-07-23 18:22:51.040132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.040157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.040329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.040364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.040501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.040528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.040665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.040704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.040798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.040838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 
00:34:43.481 [2024-07-23 18:22:51.040986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.041020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.041185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.481 [2024-07-23 18:22:51.041211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.481 qpair failed and we were unable to recover it. 00:34:43.481 [2024-07-23 18:22:51.041310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.041344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.041468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.041493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.041614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.041640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 
00:34:43.482 [2024-07-23 18:22:51.041762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.041787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.041933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.041959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.042054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.042079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.042193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.042219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.042346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.042372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 
00:34:43.482 [2024-07-23 18:22:51.042487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.042516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.042635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.042661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.042747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.042772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.042865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.042890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.042985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.043011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 
00:34:43.482 [2024-07-23 18:22:51.043131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.043157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.043246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.043273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.043387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.043415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.043559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.043585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.043679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.043704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 
00:34:43.482 [2024-07-23 18:22:51.043801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.043827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.043952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.043977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.044091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.044116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.044237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.044262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.044363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.044390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 
00:34:43.482 [2024-07-23 18:22:51.044516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.044542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.044671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.044697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.044843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.044868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.044987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.045013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.045113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.045152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 
00:34:43.482 [2024-07-23 18:22:51.045307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.045341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.045441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.045480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.045605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.045631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.045756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.045782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 00:34:43.482 [2024-07-23 18:22:51.045902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.482 [2024-07-23 18:22:51.045928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.482 qpair failed and we were unable to recover it. 
00:34:43.482 [2024-07-23 18:22:51.046046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.046071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.046216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.046241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.046350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.046394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.046549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.046576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.046703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.046729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 
00:34:43.483 [2024-07-23 18:22:51.046847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.046872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.046959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.046985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.047106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.047131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.047267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.047292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.047459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.047485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 
00:34:43.483 [2024-07-23 18:22:51.047607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.047634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.047757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.047791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.047934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.047959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.048052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.048078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.048170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.048196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 
00:34:43.483 [2024-07-23 18:22:51.048362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.048389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.048517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.048543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.048673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.048707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.048883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.048916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.049017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.049050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 
00:34:43.483 [2024-07-23 18:22:51.049201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.049227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.049313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.049346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.049465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.049491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.049632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.049658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.049775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.049822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 
00:34:43.483 [2024-07-23 18:22:51.049943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.049969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.050091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.050118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.050215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.050241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.050392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.050418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.050516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.050541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 
00:34:43.483 [2024-07-23 18:22:51.050700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.050725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.483 qpair failed and we were unable to recover it. 00:34:43.483 [2024-07-23 18:22:51.050820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.483 [2024-07-23 18:22:51.050845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.484 qpair failed and we were unable to recover it. 00:34:43.484 [2024-07-23 18:22:51.050943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.484 [2024-07-23 18:22:51.050970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.484 qpair failed and we were unable to recover it. 00:34:43.484 [2024-07-23 18:22:51.051056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.484 [2024-07-23 18:22:51.051082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.484 qpair failed and we were unable to recover it. 00:34:43.484 [2024-07-23 18:22:51.051191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.762 [2024-07-23 18:22:51.051216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.762 qpair failed and we were unable to recover it. 
00:34:43.762 [2024-07-23 18:22:51.051306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.762 [2024-07-23 18:22:51.051338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.762 qpair failed and we were unable to recover it. 00:34:43.762 [2024-07-23 18:22:51.051437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.762 [2024-07-23 18:22:51.051463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.762 qpair failed and we were unable to recover it. 00:34:43.762 [2024-07-23 18:22:51.051590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.762 [2024-07-23 18:22:51.051616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.762 qpair failed and we were unable to recover it. 00:34:43.762 [2024-07-23 18:22:51.051712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.762 [2024-07-23 18:22:51.051739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.762 qpair failed and we were unable to recover it. 00:34:43.762 [2024-07-23 18:22:51.051862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.051893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 
00:34:43.763 [2024-07-23 18:22:51.051993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.052019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.052128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.052166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.052257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.052290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.052405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.052432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.052521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.052546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 
00:34:43.763 [2024-07-23 18:22:51.052643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.052671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.052771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.052797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.052898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.052923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.053041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.053066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.053212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.053237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 
00:34:43.763 [2024-07-23 18:22:51.053358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.053384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.053512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.053537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.053625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.053650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.053744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.053770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.053888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.053914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 
00:34:43.763 [2024-07-23 18:22:51.054012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.054038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.054165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.054191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.054286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.054311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.054414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.054439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.054527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.054553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 
00:34:43.763 [2024-07-23 18:22:51.054715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.054740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.054890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.054915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.055010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.055037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.055158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.055184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.055297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.055331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 
00:34:43.763 [2024-07-23 18:22:51.055430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.055457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.055574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.055600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.055697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.055723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.055924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.055950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.056051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.056077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 
00:34:43.763 [2024-07-23 18:22:51.056174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.056199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.056293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.056326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.056419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.056446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.056544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.056571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.056663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.056689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 
00:34:43.763 [2024-07-23 18:22:51.056802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.056828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.056945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.056971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.057095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.057128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.763 [2024-07-23 18:22:51.057258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.763 [2024-07-23 18:22:51.057285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.763 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.057427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.057454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 
00:34:43.764 [2024-07-23 18:22:51.057602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.057628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.057749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.057775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.057927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.057957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.058095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.058122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.058240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.058266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 
00:34:43.764 [2024-07-23 18:22:51.058361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.058386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.058509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.058535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.058625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.058652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.058774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.058801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.058943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.058968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 
00:34:43.764 [2024-07-23 18:22:51.059086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.059111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.059234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.059260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.059427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.059456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.059581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.059607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.059727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.059753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 
00:34:43.764 [2024-07-23 18:22:51.059842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.059867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.060021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.060047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.060138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.060164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.060284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.060311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.060457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.060495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 
00:34:43.764 [2024-07-23 18:22:51.060650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.060677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.060767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.060792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.060938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.060963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.061054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.061080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.061173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.061200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 
00:34:43.764 [2024-07-23 18:22:51.061286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.061312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.061456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.061482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.061570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.061595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.061682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.061708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.061805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.061835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 
00:34:43.764 [2024-07-23 18:22:51.061949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.061975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.062094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.062120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.062237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.062263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.062398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.062427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.062549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.062583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 
00:34:43.764 [2024-07-23 18:22:51.062679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.062704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.764 qpair failed and we were unable to recover it. 00:34:43.764 [2024-07-23 18:22:51.062819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.764 [2024-07-23 18:22:51.062845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.765 qpair failed and we were unable to recover it. 00:34:43.765 [2024-07-23 18:22:51.062937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.765 [2024-07-23 18:22:51.062962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.765 qpair failed and we were unable to recover it. 00:34:43.765 [2024-07-23 18:22:51.063088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.765 [2024-07-23 18:22:51.063113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.765 qpair failed and we were unable to recover it. 00:34:43.765 [2024-07-23 18:22:51.063231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.765 [2024-07-23 18:22:51.063256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.765 qpair failed and we were unable to recover it. 
00:34:43.765 [2024-07-23 18:22:51.063406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.063434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.063555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.063581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.063668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.063693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.063788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.063814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.063914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.063940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.064088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.064127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.064265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.064292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.064399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.064425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.064518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.064543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.064635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.064662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.064783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.064809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.064926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.064954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.065089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.065117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.065243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.065270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.065421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.065447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.065564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.065608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.065751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.065801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.065984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.066018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.066164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.066199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.066363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.066391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.066515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.066540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.066669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.066696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.066787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.066813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.066904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.066930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.067026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.067051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.067216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.067255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.067389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.067418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.067544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.067571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.067769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.067802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.067948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.067988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.068180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.068238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.068386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.068423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.068543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.068584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.765 [2024-07-23 18:22:51.068748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.765 [2024-07-23 18:22:51.068782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.765 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.068974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.069007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.069151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.069182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.069291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.069333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.069457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.069489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.069666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.069699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.069816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.069851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.069996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.070029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.070219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.070284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.070493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.070526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.070663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.070695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.070818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.070851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.071029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.071055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.071147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.071173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.071290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.071324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.071473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.071499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.071611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.071643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.071753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.071785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.071958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.071990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.072163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.072201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.072296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.072336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.072463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.072489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.072606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.072632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.072744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.072782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.072886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.072913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.073018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.073046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.073160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.073186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.073303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.073335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.073461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.073488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.073611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.073654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.073833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.073883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.074004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.074039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.074196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.074229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.074383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.074418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.074562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.074596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.074741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.074773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.074917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.074950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.075114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.766 [2024-07-23 18:22:51.075148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.766 qpair failed and we were unable to recover it.
00:34:43.766 [2024-07-23 18:22:51.075284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.075313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.075448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.075473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.075567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.075594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.075722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.075748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.075828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.075853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.075972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.075998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.076096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.076123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.076255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.076302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.076446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.076474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.076566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.076592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.076769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.076805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.077011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.077045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.077275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.077302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.077432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.077458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.077548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.077573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.077659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.077685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.077810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.077838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.077927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.077953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.078090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.078129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.078229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.078257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.078370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.078397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.078501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.078528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.078670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.078704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.078882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.767 [2024-07-23 18:22:51.078917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.767 qpair failed and we were unable to recover it.
00:34:43.767 [2024-07-23 18:22:51.079067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.767 [2024-07-23 18:22:51.079103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.767 qpair failed and we were unable to recover it. 00:34:43.767 [2024-07-23 18:22:51.079293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.767 [2024-07-23 18:22:51.079338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.767 qpair failed and we were unable to recover it. 00:34:43.767 [2024-07-23 18:22:51.079470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.767 [2024-07-23 18:22:51.079496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.767 qpair failed and we were unable to recover it. 00:34:43.767 [2024-07-23 18:22:51.079594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.767 [2024-07-23 18:22:51.079620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.767 qpair failed and we were unable to recover it. 00:34:43.767 [2024-07-23 18:22:51.079785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.767 [2024-07-23 18:22:51.079811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.767 qpair failed and we were unable to recover it. 
00:34:43.767 [2024-07-23 18:22:51.079929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.767 [2024-07-23 18:22:51.079957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.767 qpair failed and we were unable to recover it. 00:34:43.767 [2024-07-23 18:22:51.080075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.767 [2024-07-23 18:22:51.080103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.080190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.080216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.080321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.080349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.080442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.080467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 
00:34:43.768 [2024-07-23 18:22:51.080563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.080588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.080746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.080780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.080925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.080959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.081104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.081137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.081291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.081332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 
00:34:43.768 [2024-07-23 18:22:51.081459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.081497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.081619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.081653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.081806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.081839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.081961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.081995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.082152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.082185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 
00:34:43.768 [2024-07-23 18:22:51.082344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.082379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.082531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.082566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.082683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.082716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.082871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.082904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.083013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.083047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 
00:34:43.768 [2024-07-23 18:22:51.083194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.083228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.083343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.083377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.083525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.083558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.083683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.083718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.083843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.083877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 
00:34:43.768 [2024-07-23 18:22:51.084038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.084066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.084193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.084219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.084322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.084348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.084451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.084477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.084596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.084621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 
00:34:43.768 [2024-07-23 18:22:51.084719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.084744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.084835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.084861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.084979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.085006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.085151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.085176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.085275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.085301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 
00:34:43.768 [2024-07-23 18:22:51.085415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.085454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.085593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.085636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.085766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.085793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.768 [2024-07-23 18:22:51.085884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.768 [2024-07-23 18:22:51.085910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.768 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.086007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.086032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 
00:34:43.769 [2024-07-23 18:22:51.086120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.086148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.086238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.086263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.086356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.086383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.086476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.086502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.086617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.086643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 
00:34:43.769 [2024-07-23 18:22:51.086737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.086764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.086865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.086892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.087013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.087038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.087162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.087187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.087277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.087302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 
00:34:43.769 [2024-07-23 18:22:51.087436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.087465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.087585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.087611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.087705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.087730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.087847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.087872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.087962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.087987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 
00:34:43.769 [2024-07-23 18:22:51.088145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.088171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.088288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.088313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.088424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.088449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.088568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.088611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.088753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.088786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 
00:34:43.769 [2024-07-23 18:22:51.088932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.088966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.089117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.089152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.089351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.089402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.089536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.089587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.089750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.089787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 
00:34:43.769 [2024-07-23 18:22:51.089943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.089979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.090127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.090162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.090326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.090355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.090507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.090533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.090631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.090656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 
00:34:43.769 [2024-07-23 18:22:51.090775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.090801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.090922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.090948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.091083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.091121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.091270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.091297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 00:34:43.769 [2024-07-23 18:22:51.091403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.769 [2024-07-23 18:22:51.091428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.769 qpair failed and we were unable to recover it. 
00:34:43.769 [2024-07-23 18:22:51.091527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.091552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.091696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.091721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.091819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.091844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.091967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.091993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.092113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.092138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 
00:34:43.770 [2024-07-23 18:22:51.092248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.092288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.092400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.092429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.092553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.092579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.092701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.092727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.092847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.092874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 
00:34:43.770 [2024-07-23 18:22:51.093023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.093049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.093169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.093194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.093334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.093373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.093477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.093506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 00:34:43.770 [2024-07-23 18:22:51.093627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.770 [2024-07-23 18:22:51.093653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.770 qpair failed and we were unable to recover it. 
00:34:43.770 [2024-07-23 18:22:51.093808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.093834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.093978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.094004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.094129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.094157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.094279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.094304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.094397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.094424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.094523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.094548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.094640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.094666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.094799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.094824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.094948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.094975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.095067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.095094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.095191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.095217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.095304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.095344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.095438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.095463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.095547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.095577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.095671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.095696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.095823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.095849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.095969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.095994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.096085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.096111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.096233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.096259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.096390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.096428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.096534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.096561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.096660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.770 [2024-07-23 18:22:51.096685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.770 qpair failed and we were unable to recover it.
00:34:43.770 [2024-07-23 18:22:51.096804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.096839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.096997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.097031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.097178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.097212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.097367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.097403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.097570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.097605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.097767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.097801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.097960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.097987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.098108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.098134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.098256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.098282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.098439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.098465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.098557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.098583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.098677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.098702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.098844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.098870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.098997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.099035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.099140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.099168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.099292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.099323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.099417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.099443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.099558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.099583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.099697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.099727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.099812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.099838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.099967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.099991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.100116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.100141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.100261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.100286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.100378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.100404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.100528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.100553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.100673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.100698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.100794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.100818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.100944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.100969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.101071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.101099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.101223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.101249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.101380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.101419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.101547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.101574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.101672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.101698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.101822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.101848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.101944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.101970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.102116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.102142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.102263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.102289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.771 [2024-07-23 18:22:51.102391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.771 [2024-07-23 18:22:51.102416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.771 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.102504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.102529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.102618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.102643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.102744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.102768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.102865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.102890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.103008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.103035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.103163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.103188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.103309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.103342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.103492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.103522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.103602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.103627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.103746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.103772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.103854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.103879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.104040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.104078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.104180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.104207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.104307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.104340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.104460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.104486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.104596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.104629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.104773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.104808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.104956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.104989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.105103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.105138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.105279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.105341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.105516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.105544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.105640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.105667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.105850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.105883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.106033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.106066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.106250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.106284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.106437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.106463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.106555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.106580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.106732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.106765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.106941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.106974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.107135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.107173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.772 [2024-07-23 18:22:51.107294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.772 [2024-07-23 18:22:51.107337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.772 qpair failed and we were unable to recover it.
00:34:43.773 [2024-07-23 18:22:51.107455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.773 [2024-07-23 18:22:51.107482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.773 qpair failed and we were unable to recover it.
00:34:43.773 [2024-07-23 18:22:51.107608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.773 [2024-07-23 18:22:51.107633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.773 qpair failed and we were unable to recover it.
00:34:43.773 [2024-07-23 18:22:51.107724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.773 [2024-07-23 18:22:51.107750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.773 qpair failed and we were unable to recover it.
00:34:43.773 [2024-07-23 18:22:51.107877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.773 [2024-07-23 18:22:51.107903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.773 qpair failed and we were unable to recover it.
00:34:43.773 [2024-07-23 18:22:51.107999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.108024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.108152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.108178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.108270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.108296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.108432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.108468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.108590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.108626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 
00:34:43.773 [2024-07-23 18:22:51.108815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.108850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.109030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.109065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.109222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.109257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.109446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.109499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.109641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.109678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 
00:34:43.773 [2024-07-23 18:22:51.109884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.109921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.110150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.110223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.110382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.110425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.110551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.110585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.110703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.110736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 
00:34:43.773 [2024-07-23 18:22:51.110866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.110899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.111038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.111071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.111194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.111227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.111419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.111472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.111637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.111674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 
00:34:43.773 [2024-07-23 18:22:51.111832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.111867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.111997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.112032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.112223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.112274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.112491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.112528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.112719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.112802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 
00:34:43.773 [2024-07-23 18:22:51.112957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.113020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.113299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.113391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.113527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.113561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.113716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.113750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.113871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.113905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 
00:34:43.773 [2024-07-23 18:22:51.114039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.114077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.114296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.114342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.114461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.773 [2024-07-23 18:22:51.114496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.773 qpair failed and we were unable to recover it. 00:34:43.773 [2024-07-23 18:22:51.114643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.114677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.114830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.114864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 
00:34:43.774 [2024-07-23 18:22:51.115020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.115054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.115243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.115295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.115449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.115484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.115633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.115667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.115829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.115869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 
00:34:43.774 [2024-07-23 18:22:51.116050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.116084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.116202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.116253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.116389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.116424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.116590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.116626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.116925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.116959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 
00:34:43.774 [2024-07-23 18:22:51.117183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.117233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.117475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.117512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.117687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.117749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.117918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.117995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.118221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.118285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 
00:34:43.774 [2024-07-23 18:22:51.118492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.118548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.118760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.118797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.118914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.118948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.119139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.119174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.119288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.119333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 
00:34:43.774 [2024-07-23 18:22:51.119486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.119548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.119792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.119838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.119990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.120027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.120186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.120221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.120356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.120393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 
00:34:43.774 [2024-07-23 18:22:51.120551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.120588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.120748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.120786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.120945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.120982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.121145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.121184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.121356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.121396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 
00:34:43.774 [2024-07-23 18:22:51.121549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.121585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.121786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.121829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.121953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.121989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.122153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.122189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.122370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.122427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 
00:34:43.774 [2024-07-23 18:22:51.122602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.774 [2024-07-23 18:22:51.122641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.774 qpair failed and we were unable to recover it. 00:34:43.774 [2024-07-23 18:22:51.122778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.122817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.123016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.123053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.123215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.123252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.123420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.123475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 
00:34:43.775 [2024-07-23 18:22:51.123615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.123654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.123781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.123818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.123946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.123982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.124136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.124172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.124310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.124373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 
00:34:43.775 [2024-07-23 18:22:51.124510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.124545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.124697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.124730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.124881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.124914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.125025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.125058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.125205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.125239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 
00:34:43.775 [2024-07-23 18:22:51.125403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.125437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.125567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.125601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.125709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.125742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.125931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.125964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.126113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.126146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 
00:34:43.775 [2024-07-23 18:22:51.126263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.126296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.126440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.126473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.126590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.126622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.126764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.126800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.126953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.126984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 
00:34:43.775 [2024-07-23 18:22:51.127089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.127120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.127222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.127254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.127366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.127398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.127545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.127578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.127696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.127727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 
00:34:43.775 [2024-07-23 18:22:51.127840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.127871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.127982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.128014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.128133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.128165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.128338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.128375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.128481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.128513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 
00:34:43.775 [2024-07-23 18:22:51.128633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.128664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.128838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.128870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.129017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.129053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.129210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.775 [2024-07-23 18:22:51.129246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.775 qpair failed and we were unable to recover it. 00:34:43.775 [2024-07-23 18:22:51.129392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.129425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 
00:34:43.776 [2024-07-23 18:22:51.129566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.129598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.129723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.129771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.129980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.130029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.130160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.130194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.130332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.130381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 
00:34:43.776 [2024-07-23 18:22:51.130498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.130530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.130668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.130701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.130874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.130912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.131028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.131063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.131189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.131226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 
00:34:43.776 [2024-07-23 18:22:51.131399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.131448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.131594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.131628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.131773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.131807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.131927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.131958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.132104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.132135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 
00:34:43.776 [2024-07-23 18:22:51.132240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.132272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.132456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.132490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.132639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.132670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.132800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.132836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.132996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.133031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 
00:34:43.776 [2024-07-23 18:22:51.133186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.133222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.133427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.133460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.133627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.133663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.133824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.133860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.134060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.134097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 
00:34:43.776 [2024-07-23 18:22:51.134300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.134343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.134495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.134527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.134701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.134734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.134846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.134894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.135044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.135084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 
00:34:43.776 [2024-07-23 18:22:51.135245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.135284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.135445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.776 [2024-07-23 18:22:51.135479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.776 qpair failed and we were unable to recover it. 00:34:43.776 [2024-07-23 18:22:51.135628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.135661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.135808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.135840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.136033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.136069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 
00:34:43.777 [2024-07-23 18:22:51.136210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.136242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.136396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.136428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.136545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.136576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.136710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.136742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.136913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.136950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 
00:34:43.777 [2024-07-23 18:22:51.137104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.137141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.137293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.137342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.137503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.137535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.137658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.137695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.137857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.137895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 
00:34:43.777 [2024-07-23 18:22:51.138068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.138104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.138265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.138301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.138477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.138509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.138714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.138751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.138873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.138907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 
00:34:43.777 [2024-07-23 18:22:51.139089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.139120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.139305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.139368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.139479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.139511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.139654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.139704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.139818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.139854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 
00:34:43.777 [2024-07-23 18:22:51.139990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.140029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.140154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.140191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.140373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.140406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.140518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.140551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.140751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.140788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 
00:34:43.777 [2024-07-23 18:22:51.140939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.140975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.141167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.141203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.141379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.141411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.141556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.141588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.141756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.141794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 
00:34:43.777 [2024-07-23 18:22:51.141982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.142031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.142205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.142242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.142421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.142454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.142563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.142595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 00:34:43.777 [2024-07-23 18:22:51.142745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.777 [2024-07-23 18:22:51.142780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.777 qpair failed and we were unable to recover it. 
00:34:43.777 [2024-07-23 18:22:51.142983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.143021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.143170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.143207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.143352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.143384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.143502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.143534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.143656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.143688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.143879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.143917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.144113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.144151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.144292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.144355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.144493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.144537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.144747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.144787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.144930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.144970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.145164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.145220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.145432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.145471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.145632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.145669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.145806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.145854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.145973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.146004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.146166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.146197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.146342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.146382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.146571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.146603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.146758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.146792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.146941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.146973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.147136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.147174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.147355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.147395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.147572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.147604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.147759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.147791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.147933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.147964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.148128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.148164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.148337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.148370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.148502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.148533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.148655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.148687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.148786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.148817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.148930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.148963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.149095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.149127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.149325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.149358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.149539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.149590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.149750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.149785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.149950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.149987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.150204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.150236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.778 qpair failed and we were unable to recover it.
00:34:43.778 [2024-07-23 18:22:51.150378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.778 [2024-07-23 18:22:51.150411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.150617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.150661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.150898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.150961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.151181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.151226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.151395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.151431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.151593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.151629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.151775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.151806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.151932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.151964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.152158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.152196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.152395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.152433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.152574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.152612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.152770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.152807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.152930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.152961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.153084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.153116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.153286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.153344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.153542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.153580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.153791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.153822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.153967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.153998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.154149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.154180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.154372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.154405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.154544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.154575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.154752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.154790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.154960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.154998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.155129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.155167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.155302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.155342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.155509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.155559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.155737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.155775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.155955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.155993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.156162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.156200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.156333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.156374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.156517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.156555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.156720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.156758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.156925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.156963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.157103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.157143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.157326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.157359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.157461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.157492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.157663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.157695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.157844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.157882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.158067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.779 [2024-07-23 18:22:51.158103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.779 qpair failed and we were unable to recover it.
00:34:43.779 [2024-07-23 18:22:51.158214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.158245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.158379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.158411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.158554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.158592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.158728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.158765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.158948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.158986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.159147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.159185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.159365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.159404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.159616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.159654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.159821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.159859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.159995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.160043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.160176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.160207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.160346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.160384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.160533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.160571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.160700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.160737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.160904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.160941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.161109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.161140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.161285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.161324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.161553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.161584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.161701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.161731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.161911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.161942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.162050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.162081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.162243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.162280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.162477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.162508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.162684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.162716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.162822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.162854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.162992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.163039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.163175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.163212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.163390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.163429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.163585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.163623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.163822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.163859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.163993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.164030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.164186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.164222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.164386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.164423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.164585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.164621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.164769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.164806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.165014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.165045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.165201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.165238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.165403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.780 [2024-07-23 18:22:51.165440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.780 qpair failed and we were unable to recover it.
00:34:43.780 [2024-07-23 18:22:51.165572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.780 [2024-07-23 18:22:51.165610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.780 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.165806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.165843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.166019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.166062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.166210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.166247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.166410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.166448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 
00:34:43.781 [2024-07-23 18:22:51.166632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.166669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.166797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.166856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.167027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.167068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.167234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.167275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.167490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.167530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 
00:34:43.781 [2024-07-23 18:22:51.167739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.167777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.168006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.168043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.168263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.168306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.168522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.168558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.168686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.168721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 
00:34:43.781 [2024-07-23 18:22:51.168888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.168926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.169072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.169132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.169331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.169373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.169538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.169575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.169715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.169752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 
00:34:43.781 [2024-07-23 18:22:51.169947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.169984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.170159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.170196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.170370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.170409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.170535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.170571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.170724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.170760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 
00:34:43.781 [2024-07-23 18:22:51.170955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.170993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.171149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.171189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.171359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.171401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.171555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.171594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.171764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.171810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 
00:34:43.781 [2024-07-23 18:22:51.171984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.172023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.172181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.172220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.172407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.172447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.172622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.781 [2024-07-23 18:22:51.172661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.781 qpair failed and we were unable to recover it. 00:34:43.781 [2024-07-23 18:22:51.172814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.172852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 
00:34:43.782 [2024-07-23 18:22:51.173037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.173076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.173230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.173270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.173463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.173505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.173710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.173749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.173986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.174044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 
00:34:43.782 [2024-07-23 18:22:51.174306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.174397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.174566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.174605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.174738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.174778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.174993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.175032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.175214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.175253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 
00:34:43.782 [2024-07-23 18:22:51.175431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.175470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.175611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.175650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.175770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.175809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.175992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.176031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.176243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.176281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 
00:34:43.782 [2024-07-23 18:22:51.176441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.176482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.176690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.176729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.176876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.176918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.177102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.177160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.177369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.177411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 
00:34:43.782 [2024-07-23 18:22:51.177557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.177597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.177743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.177783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.177962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.178003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.178178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.178220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.178406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.178446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 
00:34:43.782 [2024-07-23 18:22:51.178623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.178663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.178865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.178904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.179096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.179135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.179287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.179334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.179507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.179547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 
00:34:43.782 [2024-07-23 18:22:51.179762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.179801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.179982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.180022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.180232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.180291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.180478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.180541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.180770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.180849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 
00:34:43.782 [2024-07-23 18:22:51.181090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.181156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.181360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.181402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.782 [2024-07-23 18:22:51.181547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.782 [2024-07-23 18:22:51.181587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.782 qpair failed and we were unable to recover it. 00:34:43.783 [2024-07-23 18:22:51.181768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.783 [2024-07-23 18:22:51.181807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.783 qpair failed and we were unable to recover it. 00:34:43.783 [2024-07-23 18:22:51.181957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.783 [2024-07-23 18:22:51.181997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.783 qpair failed and we were unable to recover it. 
00:34:43.783 [2024-07-23 18:22:51.182201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.783 [2024-07-23 18:22:51.182260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.783 qpair failed and we were unable to recover it.
[... the three-line sequence above repeats verbatim (connect() refused with errno = 111, i.e. ECONNREFUSED, against 10.0.0.2:4420 on tqpair=0x7b3f40) for successive timestamps from 18:22:51.182 through 18:22:51.209 ...]
00:34:43.786 [2024-07-23 18:22:51.209354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.209400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.209600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.209645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.209837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.209881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.210139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.210212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.210468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.210514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 
00:34:43.786 [2024-07-23 18:22:51.210692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.210736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.210972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.211016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.211207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.211252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.211464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.211510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.211705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.211749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 
00:34:43.786 [2024-07-23 18:22:51.211918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.211962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.212128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.212173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.212363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.212408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.212644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.212688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.212857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.212901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 
00:34:43.786 [2024-07-23 18:22:51.213078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.213132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.213340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.213394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.213597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.213641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.213868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.213913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.214059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.214103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 
00:34:43.786 [2024-07-23 18:22:51.214278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.214332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.214536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.214581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.214803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.214848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.215012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.215057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.215255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.215299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 
00:34:43.786 [2024-07-23 18:22:51.215501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.215546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.215738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.215782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.216013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.216067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.216353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.216409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.216661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.216744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 
00:34:43.786 [2024-07-23 18:22:51.217005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.217050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.786 qpair failed and we were unable to recover it. 00:34:43.786 [2024-07-23 18:22:51.217277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.786 [2024-07-23 18:22:51.217335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.217491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.217535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.217688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.217733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.217886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.217930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 
00:34:43.787 [2024-07-23 18:22:51.218123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.218167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.218337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.218383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.218616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.218660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.218856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.218901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.219127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.219171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 
00:34:43.787 [2024-07-23 18:22:51.219314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.219370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.219534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.219577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.219768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.219815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.220026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.220070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.220342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.220409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 
00:34:43.787 [2024-07-23 18:22:51.220576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.220621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.220815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.220859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.221063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.221107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.221299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.221377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.221583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.221630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 
00:34:43.787 [2024-07-23 18:22:51.221876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.221924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.222095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.222149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.222351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.222400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.222605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.222653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.222843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.222886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 
00:34:43.787 [2024-07-23 18:22:51.223070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.223139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.223396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.223442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.223633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.223688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.223851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.223895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.224119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.224163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 
00:34:43.787 [2024-07-23 18:22:51.224367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.224412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.224604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.224648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.224816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.224862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.225043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.225088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.225331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.225376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 
00:34:43.787 [2024-07-23 18:22:51.225526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.225570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.225777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.225820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.226020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.787 [2024-07-23 18:22:51.226062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.787 qpair failed and we were unable to recover it. 00:34:43.787 [2024-07-23 18:22:51.226273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.226376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.226547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.226590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 
00:34:43.788 [2024-07-23 18:22:51.226747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.226790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.227031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.227105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.227339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.227386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.227613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.227657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.227945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.228017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 
00:34:43.788 [2024-07-23 18:22:51.228264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.228308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.228531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.228602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.228873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.228917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.229199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.229276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 00:34:43.788 [2024-07-23 18:22:51.229552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.788 [2024-07-23 18:22:51.229596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.788 qpair failed and we were unable to recover it. 
00:34:43.791 [2024-07-23 18:22:51.259983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.260043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.260261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.260311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.260538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.260589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.260808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.260865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.261059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.261109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 
00:34:43.791 [2024-07-23 18:22:51.261369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.261420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.261619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.261669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.261875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.261925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.262116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.262167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.262387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.262438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 
00:34:43.791 [2024-07-23 18:22:51.262651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.262701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.262898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.262949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.263137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.263187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.263374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.263426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.263601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.263651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 
00:34:43.791 [2024-07-23 18:22:51.263864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.263914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.264161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.264211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.264444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.264495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.264751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.264802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.265020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.265073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 
00:34:43.791 [2024-07-23 18:22:51.265287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.265352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.265557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.265608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.265842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.265892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.266056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.791 [2024-07-23 18:22:51.266106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.791 qpair failed and we were unable to recover it. 00:34:43.791 [2024-07-23 18:22:51.266327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.266380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 
00:34:43.792 [2024-07-23 18:22:51.266597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.266646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.266896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.266946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.267199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.267250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.267435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.267486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.267670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.267720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 
00:34:43.792 [2024-07-23 18:22:51.267947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.268005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.268198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.268252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.268506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.268558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.268784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.268833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.269027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.269080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 
00:34:43.792 [2024-07-23 18:22:51.269353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.269406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.269621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.269671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.269910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.269961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.270232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.270282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.270509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.270559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 
00:34:43.792 [2024-07-23 18:22:51.270802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.270851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.271046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.271096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.271308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.271372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.271594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.271645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.271897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.271950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 
00:34:43.792 [2024-07-23 18:22:51.272154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.272208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.272454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.272509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.272779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.272833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.273073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.273127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.273332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.273389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 
00:34:43.792 [2024-07-23 18:22:51.273639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.273711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.273968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.274039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.274294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.274361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.274595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.274649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.274880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.274934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 
00:34:43.792 [2024-07-23 18:22:51.275158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.275212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.275501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.275557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.275759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.275812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.276039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.276092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 00:34:43.792 [2024-07-23 18:22:51.276327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.792 [2024-07-23 18:22:51.276381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.792 qpair failed and we were unable to recover it. 
00:34:43.793 [2024-07-23 18:22:51.276610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.276666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.276852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.276905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.277140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.277194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.277448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.277503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.277768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.277821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 
00:34:43.793 [2024-07-23 18:22:51.278098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.278152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.278389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.278444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.278666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.278719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.278988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.279042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.279274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.279343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 
00:34:43.793 [2024-07-23 18:22:51.279540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.279594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.279842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.279908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.280134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.280187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.280355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.280412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.280682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.280735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 
00:34:43.793 [2024-07-23 18:22:51.280987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.281060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.281293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.281369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.281634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.281688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.281918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.281971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.282168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.282221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 
00:34:43.793 [2024-07-23 18:22:51.282497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.282571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.282886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.282971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.283232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2504650 Killed "${NVMF_APP[@]}" "$@" 00:34:43.793 [2024-07-23 18:22:51.283286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.283507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.283562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 00:34:43.793 [2024-07-23 18:22:51.283785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.793 [2024-07-23 18:22:51.283867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.793 qpair failed and we were unable to recover it. 
00:34:43.793 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:43.793 [2024-07-23 18:22:51.284170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 [2024-07-23 18:22:51.284243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:43.793 [2024-07-23 18:22:51.284489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:43.793 [2024-07-23 18:22:51.284544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 [2024-07-23 18:22:51.284717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 [2024-07-23 18:22:51.284771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:43.793 [2024-07-23 18:22:51.285041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:43.793 [2024-07-23 18:22:51.285116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 [2024-07-23 18:22:51.285356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 [2024-07-23 18:22:51.285411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 [2024-07-23 18:22:51.285657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 [2024-07-23 18:22:51.285711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 [2024-07-23 18:22:51.285914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 [2024-07-23 18:22:51.285968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 [2024-07-23 18:22:51.286240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 [2024-07-23 18:22:51.286295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 [2024-07-23 18:22:51.286555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 [2024-07-23 18:22:51.286620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 [2024-07-23 18:22:51.286811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.793 [2024-07-23 18:22:51.286867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.793 qpair failed and we were unable to recover it.
00:34:43.793 [2024-07-23 18:22:51.287091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.287147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.287438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.287493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.287742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.287796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.288072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.288127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.288392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.288446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.288709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.288763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.288996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.289050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.289326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.289381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.289608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.289680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.289936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2505207
00:34:43.794 [2024-07-23 18:22:51.290008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2505207
00:34:43.794 [2024-07-23 18:22:51.290255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.290308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2505207 ']' 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:43.794 [2024-07-23 18:22:51.290599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.290652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:43.794 [2024-07-23 18:22:51.290878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:43.794 [2024-07-23 18:22:51.290931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:43.794 [2024-07-23 18:22:51.291167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.291222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:43.794 [2024-07-23 18:22:51.291488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.291562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.291890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.291945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.292216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.292271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.292483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.292518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.292644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.292677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.292830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.292862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.293022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.293055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.293171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.293204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.293329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.293371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.293495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.293533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.293662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.293695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.293807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.293839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.293945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.293978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.294129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.294162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.294287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.294328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.294465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.294497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.294622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.294654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.794 [2024-07-23 18:22:51.294773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.794 [2024-07-23 18:22:51.294805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.794 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.294925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.294958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.295100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.295131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.295281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.295312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.295481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.295514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.295688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.295721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.295874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.295906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.296021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.296054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.296177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.296210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.296373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.296406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.296547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.296578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.296713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.296745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.296889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.296921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.297066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.297098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.297236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.297267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.297413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.297445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.297550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.297582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.297728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.297759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.297902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.297932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.298045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.298082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.298203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.298234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.298389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.298422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.298532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.298563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.298678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.298708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.298850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.298882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.298997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.299029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.299174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.299207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.299332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.299383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.299520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.299549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.299704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.299735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.795 [2024-07-23 18:22:51.299871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.795 [2024-07-23 18:22:51.299901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.795 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.300047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.300076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.300193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.300224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.300352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.300384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.300526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.300555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.300696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.300726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.300829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.300858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.300972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.301002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.301143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.301173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.301341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.301372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.301509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.301540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.301677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.301707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.301829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.301859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.301993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.302024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.302135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.302164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.302298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.302334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.302467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.302497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.302613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.796 [2024-07-23 18:22:51.302642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.796 qpair failed and we were unable to recover it.
00:34:43.796 [2024-07-23 18:22:51.302755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.302784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.302888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.302917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.303025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.303054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.303190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.303218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.303384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.303413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 
00:34:43.796 [2024-07-23 18:22:51.303518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.303546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.303642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.303671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.303806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.303835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.303984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.304013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.304112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.304141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 
00:34:43.796 [2024-07-23 18:22:51.304301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.304338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.304442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.304471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.304608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.304642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.304778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.304807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.304910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.304938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 
00:34:43.796 [2024-07-23 18:22:51.305047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.305075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.305207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.305236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.305374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.305403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.796 qpair failed and we were unable to recover it. 00:34:43.796 [2024-07-23 18:22:51.305516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.796 [2024-07-23 18:22:51.305544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.305641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.305669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 
00:34:43.797 [2024-07-23 18:22:51.305804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.305832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.305939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.305967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.306090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.306146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.306291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.306332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.306448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.306480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 
00:34:43.797 [2024-07-23 18:22:51.306595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.306625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.306796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.306826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.306958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.306989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.307124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.307153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.307263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.307292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 
00:34:43.797 [2024-07-23 18:22:51.307413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.307443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.307574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.307603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.307735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.307764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.307873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.307902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.308032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.308061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 
00:34:43.797 [2024-07-23 18:22:51.308205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.308235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.308380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.308410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.308538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.308568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.308730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.308759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.308875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.308905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 
00:34:43.797 [2024-07-23 18:22:51.309044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.309073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.309233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.309263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.309373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.309404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.309541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.309570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.309707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.309737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 
00:34:43.797 [2024-07-23 18:22:51.309870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.309899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.310035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.310064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.310192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.310222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.310336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.310366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.310499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.310528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 
00:34:43.797 [2024-07-23 18:22:51.310661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.310691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.310836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.310866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.310998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.311033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.311144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.311175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 00:34:43.797 [2024-07-23 18:22:51.311302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.797 [2024-07-23 18:22:51.311339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.797 qpair failed and we were unable to recover it. 
00:34:43.798 [2024-07-23 18:22:51.311486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.311515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.311642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.311670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.311810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.311839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.311966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.311995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.312092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.312120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 
00:34:43.798 [2024-07-23 18:22:51.312279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.312307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.312467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.312494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.312646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.312674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.312780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.312807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.312906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.312933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 
00:34:43.798 [2024-07-23 18:22:51.313057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.313085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.313208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.313236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.313348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.313391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.313514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.313541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.313642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.313669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 
00:34:43.798 [2024-07-23 18:22:51.313786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.313812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.313916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.313942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.314045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.314071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.314200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.314227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.314323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.314367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 
00:34:43.798 [2024-07-23 18:22:51.314459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.314486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.314612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.314639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.314797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.314824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.314939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.314967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.315108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.315142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 
00:34:43.798 [2024-07-23 18:22:51.315239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.315267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.315399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.315428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.315525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.315552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.315670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.315698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.315829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.315857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 
00:34:43.798 [2024-07-23 18:22:51.315971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.316003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.316101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.316127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.316237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.316265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.316383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.316410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.316508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.316535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 
00:34:43.798 [2024-07-23 18:22:51.316684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.316711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.316840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.316866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.798 qpair failed and we were unable to recover it. 00:34:43.798 [2024-07-23 18:22:51.316960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.798 [2024-07-23 18:22:51.316986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.317114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.317140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.317236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.317265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 
00:34:43.799 [2024-07-23 18:22:51.317374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.317403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.317512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.317539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.317640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.317668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.317774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.317801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.317955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.317982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 
00:34:43.799 [2024-07-23 18:22:51.318081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.318110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.318240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.318267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.318401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.318427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.318523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.318549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.318679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.318706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 
00:34:43.799 [2024-07-23 18:22:51.318807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.318833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.318969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.319002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.319104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.319133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.319239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.319266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.319365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.319393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 
00:34:43.799 [2024-07-23 18:22:51.319521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.319549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.319674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.319702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.319827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.319855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.319983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.320010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.320160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.320187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 
00:34:43.799 [2024-07-23 18:22:51.320321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.320350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.320480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.320506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.320614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.320641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.320741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.320768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.320886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.320912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 
00:34:43.799 [2024-07-23 18:22:51.321020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.321046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.321144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.321170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.321275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.321302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.321418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.321444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.321553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.321579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 
00:34:43.799 [2024-07-23 18:22:51.321737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.321762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.321886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.321912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.322010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.799 [2024-07-23 18:22:51.322036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.799 qpair failed and we were unable to recover it. 00:34:43.799 [2024-07-23 18:22:51.322161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.322187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.322281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.322306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 
00:34:43.800 [2024-07-23 18:22:51.322408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.322436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.322530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.322556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.322679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.322706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.322837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.322868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.322969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.322995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 
00:34:43.800 [2024-07-23 18:22:51.323116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.323144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.323245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.323273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.323385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.323411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.323505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.323530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.323679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.323704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 
00:34:43.800 [2024-07-23 18:22:51.323823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.323848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.323946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.323972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.324074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.324100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.324234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.324259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.324381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.324408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 
00:34:43.800 [2024-07-23 18:22:51.324531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.324557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.324645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.324670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.324766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.324792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.324915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.324940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.325040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.325065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 
00:34:43.800 [2024-07-23 18:22:51.325167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.325193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.325326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.325356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.325461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.325488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.325614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.325640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.325790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.325817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 
00:34:43.800 [2024-07-23 18:22:51.325965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.325992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.326145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.326172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.326299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.326331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.800 [2024-07-23 18:22:51.326438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.800 [2024-07-23 18:22:51.326464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.800 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.326558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.326583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 
00:34:43.801 [2024-07-23 18:22:51.326706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.326735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.326887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.326912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.327014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.327040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.327149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.327174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.327299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.327336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 
00:34:43.801 [2024-07-23 18:22:51.327450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.327475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.327566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.327590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.327679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.327703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.327829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.327855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.328001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.328026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 
00:34:43.801 [2024-07-23 18:22:51.328145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.328169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.328260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.328285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.328395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.328421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.328541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.328566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.328692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.328716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 
00:34:43.801 [2024-07-23 18:22:51.328834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.328859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.328950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.328974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.329072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.329098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.329192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.329217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.329335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.329360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 
00:34:43.801 [2024-07-23 18:22:51.329457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.329481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.329604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.329628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.329752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.329776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.329871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.329895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 00:34:43.801 [2024-07-23 18:22:51.330019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.801 [2024-07-23 18:22:51.330043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.801 qpair failed and we were unable to recover it. 
00:34:43.803 [2024-07-23 18:22:51.339099] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:34:43.803 [2024-07-23 18:22:51.339181] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:43.804 [2024-07-23 18:22:51.345292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.345324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.345427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.345455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.345616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.345656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.345751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.345779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.345899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.345925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 
00:34:43.804 [2024-07-23 18:22:51.346042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.346068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.346156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.346182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.346281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.346308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.346439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.346464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.346578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.346603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 
00:34:43.804 [2024-07-23 18:22:51.346721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.346746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.804 [2024-07-23 18:22:51.346842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.804 [2024-07-23 18:22:51.346868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.804 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.346962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.346986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.347085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.347112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.347215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.347242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 
00:34:43.805 [2024-07-23 18:22:51.347342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.347371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.347463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.347490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.347604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.347629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.347725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.347752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.347875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.347902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 
00:34:43.805 [2024-07-23 18:22:51.348023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.348047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.348132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.348158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.348256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.348281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.348468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.348494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.348614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.348640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 
00:34:43.805 [2024-07-23 18:22:51.348732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.348758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.348902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.348928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.349026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.349053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.349186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.349212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.349337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.349363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 
00:34:43.805 [2024-07-23 18:22:51.349458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.349485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.349582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.349609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.349695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.349721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.349837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.349862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.349959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.349985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 
00:34:43.805 [2024-07-23 18:22:51.350116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.350155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.350265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.350305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.350447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.350475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.350576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.350603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.350727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.350755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 
00:34:43.805 [2024-07-23 18:22:51.350853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.350881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.351005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.351035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.351132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.351156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.351269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.351293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.351394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.351419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 
00:34:43.805 [2024-07-23 18:22:51.351517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.351541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.805 qpair failed and we were unable to recover it. 00:34:43.805 [2024-07-23 18:22:51.351657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.805 [2024-07-23 18:22:51.351682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.351781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.351805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.351902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.351928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.352046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.352070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 
00:34:43.806 [2024-07-23 18:22:51.352190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.352216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.352357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.352396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.352529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.352559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.352661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.352688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.352812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.352837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 
00:34:43.806 [2024-07-23 18:22:51.352972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.352998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.353090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.353116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.353244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.353270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.353413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.353453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.353583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.353611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 
00:34:43.806 [2024-07-23 18:22:51.353730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.353756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.353878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.353903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.353996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.354022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.354133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.354172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.354302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.354334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 
00:34:43.806 [2024-07-23 18:22:51.354427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.354452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.354547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.354573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.354692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.354717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.354797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.354827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.354924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.354949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 
00:34:43.806 [2024-07-23 18:22:51.355094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.355122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.355241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.355281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.355452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.355481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.355581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.355606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.355755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.355781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 
00:34:43.806 [2024-07-23 18:22:51.355903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.355928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.356051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.356076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.356226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.356252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.356357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.356396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.356560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.356599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 
00:34:43.806 [2024-07-23 18:22:51.356730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.356757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.356877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.356902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.356992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.357016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.357113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.357138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.357253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.357278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 
00:34:43.806 [2024-07-23 18:22:51.357373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.806 [2024-07-23 18:22:51.357399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.806 qpair failed and we were unable to recover it. 00:34:43.806 [2024-07-23 18:22:51.357522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.357547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.357669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.357694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.357811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.357837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.357956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.357981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 
00:34:43.807 [2024-07-23 18:22:51.358088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.358116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.358227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.358266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.358439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.358467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.358589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.358615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.358708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.358734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 
00:34:43.807 [2024-07-23 18:22:51.358897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.358935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.359064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.359090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.359187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.359215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.359343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.359370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.359464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.359489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 
00:34:43.807 [2024-07-23 18:22:51.359637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.359662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.359786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.359811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.359936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.359964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.360088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.360115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.360210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.360235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 
00:34:43.807 [2024-07-23 18:22:51.360329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.360355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.360474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.360499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.360617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.360642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.360737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.360762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.360878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.360904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 
00:34:43.807 [2024-07-23 18:22:51.361021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.361046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.361137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.361165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.361263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.361289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.361409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.361449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.361545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.361571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 
00:34:43.807 [2024-07-23 18:22:51.361670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.361699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.361853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.361879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.361997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.362023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.362119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.362146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.362248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.362276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 
00:34:43.807 [2024-07-23 18:22:51.362385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.362412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.362530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.362556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.362661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.362688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.362793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.362819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.362912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.362938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 
00:34:43.807 [2024-07-23 18:22:51.363084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.363109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.807 [2024-07-23 18:22:51.363224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.807 [2024-07-23 18:22:51.363249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.807 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.363364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.363392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.363517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.363545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.363642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.363668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.363769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.363795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.363907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.363934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.364057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.364083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.364173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.364201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.364338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.364377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.364478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.364509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.364601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.364626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.364741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.364765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.364887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.364912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.365002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.365027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.365123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.365149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.365243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.365268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.365409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.365435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.365522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.365549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.365644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.365670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.365781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.365806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.365926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.365951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.366074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.366100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.366231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.366270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.366437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.366464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.366589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.366614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.366761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.366786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.366903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.366929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.367046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.367070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.367218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.367243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.367354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.367394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.367532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.367571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.367674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.367701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.367787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.367814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.367910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.367936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.368075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.368113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.368241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.368268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.368374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.368410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.368536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.368562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.368693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.368719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.368845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.368870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.368973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.368999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.369117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.369143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.369263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.369288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 00:34:43.808 [2024-07-23 18:22:51.369424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.369452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.808 qpair failed and we were unable to recover it. 
00:34:43.808 [2024-07-23 18:22:51.369577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.808 [2024-07-23 18:22:51.369603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 00:34:43.809 [2024-07-23 18:22:51.369703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.369729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 00:34:43.809 [2024-07-23 18:22:51.369843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.369870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 00:34:43.809 [2024-07-23 18:22:51.369990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.370016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 00:34:43.809 [2024-07-23 18:22:51.370108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.370135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 
00:34:43.809 [2024-07-23 18:22:51.370257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.370283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 00:34:43.809 [2024-07-23 18:22:51.370400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.370431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 00:34:43.809 [2024-07-23 18:22:51.370560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.370586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 00:34:43.809 [2024-07-23 18:22:51.370687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.370713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 00:34:43.809 [2024-07-23 18:22:51.370859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.809 [2024-07-23 18:22:51.370885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.809 qpair failed and we were unable to recover it. 
00:34:43.809 [2024-07-23 18:22:51.371019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.371057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.371158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.371183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.371282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.371309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.371443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.371469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.371594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.371620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.371766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.371791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.371909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.371934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.372043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.372082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.372209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.372236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.372361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.372389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.372512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.372538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.372677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.372702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.372830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.372856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.372981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.373006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.373154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.373192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.373328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.373356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.373480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.373507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.373602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.373628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.373773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.373799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.373916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.373942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.374044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.374071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.374162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.374188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.374328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.374371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.374469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.374495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.374610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.374635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.374779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.374804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.374901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.374926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.809 qpair failed and we were unable to recover it.
00:34:43.809 [2024-07-23 18:22:51.375020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.809 [2024-07-23 18:22:51.375046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.375165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.375191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.375284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.375310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.375423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.375449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.375569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.375595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.375716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.375742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.375866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.375891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.375991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.376019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.376111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.376136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.376276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.376327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.376464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.376491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.376586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.376612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.376731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.376757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.376846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.376871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.376969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.376996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.377097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.377124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.377249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.377278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.377447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.377486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.377596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.377624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.377711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.377737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.377827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.377853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.377988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.378026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.378150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.378185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.378350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.378390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.378496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.378524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.378647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.378673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.378796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.378823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.810 qpair failed and we were unable to recover it.
00:34:43.810 [2024-07-23 18:22:51.378923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.810 [2024-07-23 18:22:51.378949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.379077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.379104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.379195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.379221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.379371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.379396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.379485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.379509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.379600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.379624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.379741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.379765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.379857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.379882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.380003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.380027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.380154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.380182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.380310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.380351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.380450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.380476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.380597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.380623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.380717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.380743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.380871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.380896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.381016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.381041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.381139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.381165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.381287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 EAL: No free 2048 kB hugepages reported on node 1
00:34:43.811 [2024-07-23 18:22:51.381313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.381414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.381439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.381531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.381557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.381676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.381701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.381817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.381842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.381951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.381979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.382067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.382093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.382184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.382210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.382303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.382334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.382461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.382487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.382607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.382632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.382730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.382757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.382877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.382903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.382996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.383021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.383111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.383137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.383274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.383313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.383419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.383447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.383565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.383591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.383711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.383743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.383839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.383865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.383983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.811 [2024-07-23 18:22:51.384008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.811 qpair failed and we were unable to recover it.
00:34:43.811 [2024-07-23 18:22:51.384100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.812 [2024-07-23 18:22:51.384126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.812 qpair failed and we were unable to recover it.
00:34:43.812 [2024-07-23 18:22:51.384297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.812 [2024-07-23 18:22:51.384342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.812 qpair failed and we were unable to recover it.
00:34:43.812 [2024-07-23 18:22:51.384452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.812 [2024-07-23 18:22:51.384480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.812 qpair failed and we were unable to recover it.
00:34:43.812 [2024-07-23 18:22:51.384601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.384626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.384740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.384768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.384860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.384884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.384983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.385011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.385110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.385136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 
00:34:43.812 [2024-07-23 18:22:51.385219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.385245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.385359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.385387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.385510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.385535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.385658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.385684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.385785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.385811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 
00:34:43.812 [2024-07-23 18:22:51.385928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.385954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.386054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.386080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.386202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.386228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.386359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.386385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.386505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.386530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 
00:34:43.812 [2024-07-23 18:22:51.386621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.386646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.386769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.386794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.386886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.386911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.387010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.387035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.387170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.387209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 
00:34:43.812 [2024-07-23 18:22:51.387328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.387356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.387459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.387487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.387595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.387620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.387717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.387742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.387863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.387889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 
00:34:43.812 [2024-07-23 18:22:51.388012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.812 [2024-07-23 18:22:51.388040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.812 qpair failed and we were unable to recover it. 00:34:43.812 [2024-07-23 18:22:51.388175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.388213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.388322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.388351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.388457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.388483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.388579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.388605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 
00:34:43.813 [2024-07-23 18:22:51.388703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.388729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.388850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.388876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.389002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.389027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.389147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.389173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.389323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.389354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 
00:34:43.813 [2024-07-23 18:22:51.389466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.389492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.389614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.389640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.389738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.389764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.389858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.389886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.390007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.390033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 
00:34:43.813 [2024-07-23 18:22:51.390152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.390178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.390275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.390300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.390412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.390438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.390526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.390551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.390646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.390671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 
00:34:43.813 [2024-07-23 18:22:51.390785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.390810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.390929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.390954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.391097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.391122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.391238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.391276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.391390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.391429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 
00:34:43.813 [2024-07-23 18:22:51.391557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.391585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.391686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.391713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.391859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.391885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.391974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.392000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.392090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.392115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 
00:34:43.813 [2024-07-23 18:22:51.392249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.392287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.392436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.392464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.392561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.392587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.392693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.392718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.392842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.392867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 
00:34:43.813 [2024-07-23 18:22:51.392961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.392987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.393125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.393164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.393289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.393323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.813 qpair failed and we were unable to recover it. 00:34:43.813 [2024-07-23 18:22:51.393447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.813 [2024-07-23 18:22:51.393473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.393594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.393620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 
00:34:43.814 [2024-07-23 18:22:51.393707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.393733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.393822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.393847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.393934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.393960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.394106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.394131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.394285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.394313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 
00:34:43.814 [2024-07-23 18:22:51.394422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.394448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.394568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.394594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.394694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.394721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.394843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.394870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.395016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.395047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 
00:34:43.814 [2024-07-23 18:22:51.395140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.395168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.395265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.395290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.395405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.395433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.395551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.395577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.395677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.395703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 
00:34:43.814 [2024-07-23 18:22:51.395822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.395849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.395937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.395963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.396088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.396114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.396202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.396228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 00:34:43.814 [2024-07-23 18:22:51.396326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.814 [2024-07-23 18:22:51.396353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:43.814 qpair failed and we were unable to recover it. 
00:34:43.814 [2024-07-23 18:22:51.396484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.396509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.396642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.396667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.396782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.396807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.396911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.396936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.397027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.397052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.397173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.397200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.397364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.397399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.397522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.397560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.397714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.397741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.397836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.397862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.397957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.397982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.398091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.398130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.398247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.398286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.398406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.398434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.398528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.398553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.398646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.814 [2024-07-23 18:22:51.398671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.814 qpair failed and we were unable to recover it.
00:34:43.814 [2024-07-23 18:22:51.398773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.398803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.398899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.398924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.399052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.399079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.399171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.399196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.399284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.399309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.399431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.399456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.399545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.399570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.399689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.399714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.399834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.399859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.399957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.399983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.400072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.400097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.400233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.400271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.400393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.400421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.400557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.400583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.400681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.400707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.400821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.400847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.400939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.400966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.401087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.401113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:43.815 [2024-07-23 18:22:51.401271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.815 [2024-07-23 18:22:51.401296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:43.815 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.401440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.401478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.401573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.401599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.401697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.401722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.401844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.401869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.402001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.402026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.402126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.402151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.402267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.402295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.402410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.402447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.402542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.402568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.402691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.096 [2024-07-23 18:22:51.402716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.096 qpair failed and we were unable to recover it.
00:34:44.096 [2024-07-23 18:22:51.402808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.402835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.402921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.402947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.403052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.403078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.403187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.403214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.403326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.403353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.403477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.403503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.403629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.403655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.403743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.403768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.403897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.403923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.404048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.404075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.404169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.404194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.404313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.404351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.404459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.404485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.404611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.404637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.404733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.404760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.404853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.404880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.404984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.405010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.405102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.405129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.405217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.405243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.405368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.405395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.405498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.405523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.405626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.405651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.405744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.405770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.405890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.405916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.406035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.406063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.406205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.406243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.406386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.406415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.406515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.406542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.406682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.406709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.406802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.406830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.406955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.406980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.407080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.407107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.407197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.407223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.407346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.407375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.407499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.407524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.407614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.407640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.407797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.097 [2024-07-23 18:22:51.407822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.097 qpair failed and we were unable to recover it.
00:34:44.097 [2024-07-23 18:22:51.407918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.407943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.408071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.408096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.408222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.408249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.408343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.408374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.408509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.408535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.408625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.408650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.408738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.408764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.408913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.408939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.409020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.409045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.409156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.409184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.409355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.409393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.409498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.409525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.409615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.409641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.409738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.409764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.409853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.409883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.409983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.410011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.410131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.410156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.410261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.410299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.410407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.410433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.410529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.410554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.410677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.410702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.410788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.410812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.410958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.410982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.411079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.411106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.411208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.411235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.411355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.411382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.411477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.411503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.411619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.411644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.411762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.411787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.411908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.411935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.412033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.412061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.412199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.412237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.412390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.098 [2024-07-23 18:22:51.412418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.098 qpair failed and we were unable to recover it.
00:34:44.098 [2024-07-23 18:22:51.412535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.098 [2024-07-23 18:22:51.412561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.098 qpair failed and we were unable to recover it. 00:34:44.098 [2024-07-23 18:22:51.412689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.098 [2024-07-23 18:22:51.412714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.098 qpair failed and we were unable to recover it. 00:34:44.098 [2024-07-23 18:22:51.412831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.098 [2024-07-23 18:22:51.412857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.098 qpair failed and we were unable to recover it. 00:34:44.098 [2024-07-23 18:22:51.412956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.098 [2024-07-23 18:22:51.412983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.098 qpair failed and we were unable to recover it. 00:34:44.098 [2024-07-23 18:22:51.413079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.098 [2024-07-23 18:22:51.413106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.098 qpair failed and we were unable to recover it. 
00:34:44.099 [2024-07-23 18:22:51.413229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.413255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.413387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.413413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.413534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.413559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.413650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.413680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.413768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.413793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 
00:34:44.099 [2024-07-23 18:22:51.413942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.413969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.414064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.414090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.414183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.414210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.414359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.414385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.414476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.414501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 
00:34:44.099 [2024-07-23 18:22:51.414596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.414621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.414745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.414770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.414859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.414885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.415030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.415058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.415151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.415176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 
00:34:44.099 [2024-07-23 18:22:51.415297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.415330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.415425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.415451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.415550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.415576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.415703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.415728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.415822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.415847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 
00:34:44.099 [2024-07-23 18:22:51.415943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.415969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.416011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:44.099 [2024-07-23 18:22:51.416064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.416088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.416180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.416207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.416336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.416363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.416452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.416478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 
00:34:44.099 [2024-07-23 18:22:51.416570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.416596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.416719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.416745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.416866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.416891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.416989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.417015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.417131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.417156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 
00:34:44.099 [2024-07-23 18:22:51.417325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.417364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.417469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.417496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.417593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.417618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.417711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.417737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.417830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.417855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 
00:34:44.099 [2024-07-23 18:22:51.417948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.417977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.099 [2024-07-23 18:22:51.418098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.099 [2024-07-23 18:22:51.418124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.099 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.418246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.418271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.418362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.418388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.418487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.418513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 
00:34:44.100 [2024-07-23 18:22:51.418605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.418631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.418721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.418748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.418862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.418887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.419010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.419035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.419127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.419152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 
00:34:44.100 [2024-07-23 18:22:51.419241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.419268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.419361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.419387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.419480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.419506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.419609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.419637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.419743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.419768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 
00:34:44.100 [2024-07-23 18:22:51.419858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.419884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.420013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.420038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.420158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.420184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.420287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.420320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.420446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.420472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 
00:34:44.100 [2024-07-23 18:22:51.420572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.420602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.420699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.420729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.420820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.420849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.420943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.420968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.421091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.421119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 
00:34:44.100 [2024-07-23 18:22:51.421215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.421241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.421344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.421372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.421468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.421495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.421617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.421643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.421766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.421791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 
00:34:44.100 [2024-07-23 18:22:51.421923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.421950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.422048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.422074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.422194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.422221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.422331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.422358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 00:34:44.100 [2024-07-23 18:22:51.422446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.100 [2024-07-23 18:22:51.422472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.100 qpair failed and we were unable to recover it. 
00:34:44.100 [2024-07-23 18:22:51.422598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.100 [2024-07-23 18:22:51.422625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.100 qpair failed and we were unable to recover it.
00:34:44.100 [2024-07-23 18:22:51.422747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.100 [2024-07-23 18:22:51.422773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.100 qpair failed and we were unable to recover it.
00:34:44.100 [2024-07-23 18:22:51.422869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.100 [2024-07-23 18:22:51.422894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.100 qpair failed and we were unable to recover it.
00:34:44.100 [2024-07-23 18:22:51.423022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.100 [2024-07-23 18:22:51.423049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.100 qpair failed and we were unable to recover it.
00:34:44.100 [2024-07-23 18:22:51.423173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.423200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.423324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.423350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.423470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.423496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.423605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.423632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.423756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.423782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.423912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.423940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.424065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.424092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.424226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.424264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.424371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.424399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.424500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.424527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.424650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.424675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.424823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.424848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.424942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.424967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.425051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.425077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.425223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.425261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.425377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.425404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.425498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.425524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.425675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.425700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.425826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.425852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.425964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.425990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.426142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.426170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.426286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.426332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.426438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.426470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.426570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.426595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.426711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.426735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.426832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.426856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.426980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.427007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.427095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.427121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.427262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.427300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.427414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.101 [2024-07-23 18:22:51.427441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.101 qpair failed and we were unable to recover it.
00:34:44.101 [2024-07-23 18:22:51.427542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.427568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.427666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.427692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.427838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.427864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.427963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.427989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.428088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.428113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.428245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.428273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.428411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.428438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.428559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.428584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.428740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.428766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.428880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.428904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.428995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.429019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.429117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.429144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.429266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.429296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.429405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.429431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.429532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.429559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.429658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.429684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.429779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.429806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.429918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.429943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.430066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.430093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.430233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.430277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.430396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.430423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.430517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.430542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.430665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.430690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.430788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.430813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.430933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.430958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.431101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.431127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.431292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.431339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.431453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.431491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.431599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.431627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.431726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.431752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.431849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.431875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.431970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.431996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.432087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.432115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.432242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.432282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.432390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.432417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.432518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.432543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.432668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.432694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.432787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.102 [2024-07-23 18:22:51.432812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.102 qpair failed and we were unable to recover it.
00:34:44.102 [2024-07-23 18:22:51.432903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.432930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.433053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.433078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.433175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.433205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.433306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.433339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.433464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.433488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.433581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.433606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.433725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.433749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.433870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.433895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.434000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.434028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.434130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.434169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.434276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.434304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.434441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.434466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.434560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.434585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.434708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.434734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.434855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.434881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.435001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.435026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.435153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.435182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.435270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.435297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.435411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.435438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.435541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.435566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.435662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.435688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.435809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.435833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.435932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.435958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.436101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.436140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.436272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.436299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.436401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.436428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.436552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.436578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.436674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.436700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.436851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.436876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.436970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.436996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.437113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.437138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.437294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.437326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.437427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.437454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.437573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.437597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.437716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.437742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.437865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.437890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.438031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.438056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.438153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.103 [2024-07-23 18:22:51.438177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.103 qpair failed and we were unable to recover it.
00:34:44.103 [2024-07-23 18:22:51.438289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.104 [2024-07-23 18:22:51.438337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.104 qpair failed and we were unable to recover it.
00:34:44.104 [2024-07-23 18:22:51.438446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.104 [2024-07-23 18:22:51.438473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.104 qpair failed and we were unable to recover it.
00:34:44.104 [2024-07-23 18:22:51.438572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.104 [2024-07-23 18:22:51.438598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.104 qpair failed and we were unable to recover it.
00:34:44.104 [2024-07-23 18:22:51.438691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.104 [2024-07-23 18:22:51.438718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.104 qpair failed and we were unable to recover it.
00:34:44.104 [2024-07-23 18:22:51.438877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.104 [2024-07-23 18:22:51.438904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.104 qpair failed and we were unable to recover it.
00:34:44.104 [2024-07-23 18:22:51.438999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.439026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.439175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.439200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.439324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.439351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.439448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.439474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.439561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.439586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 
00:34:44.104 [2024-07-23 18:22:51.439679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.439704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.439805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.439832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.439929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.439956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.440086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.440125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.440217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.440244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 
00:34:44.104 [2024-07-23 18:22:51.440365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.440392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.440486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.440512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.440612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.440638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.440729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.440757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.440888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.440914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 
00:34:44.104 [2024-07-23 18:22:51.441007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.441036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.441134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.441161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.441263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.441289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.441421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.441448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.441554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.441580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 
00:34:44.104 [2024-07-23 18:22:51.441705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.441731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.441829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.441855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.441955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.441982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.442076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.442102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.442204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.442230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 
00:34:44.104 [2024-07-23 18:22:51.442326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.442352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.442479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.442505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.442624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.442650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.442783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.442809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.442906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.442933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 
00:34:44.104 [2024-07-23 18:22:51.443058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.443086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.443184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.443211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.104 [2024-07-23 18:22:51.443326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.104 [2024-07-23 18:22:51.443371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.104 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.443499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.443525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.443621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.443645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 
00:34:44.105 [2024-07-23 18:22:51.443762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.443786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.443892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.443919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.444011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.444039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.444179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.444218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.444322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.444351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 
00:34:44.105 [2024-07-23 18:22:51.444474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.444500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.444592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.444617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.444736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.444762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.444862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.444888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.445006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.445032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 
00:34:44.105 [2024-07-23 18:22:51.445125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.445151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.445251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.445279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.445379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.445407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.445497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.445522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.445644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.445670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 
00:34:44.105 [2024-07-23 18:22:51.445772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.445799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.445947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.445972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.446062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.446090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.446209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.446234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.446364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.446391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 
00:34:44.105 [2024-07-23 18:22:51.446483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.446507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.446626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.446650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.446740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.446764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.446857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.446881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.446995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.447034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 
00:34:44.105 [2024-07-23 18:22:51.447162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.447189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.447292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.447326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.447428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.447454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.447548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.447574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.447699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.447724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 
00:34:44.105 [2024-07-23 18:22:51.447844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.447869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.447993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.448020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.448113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.448139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.105 qpair failed and we were unable to recover it. 00:34:44.105 [2024-07-23 18:22:51.448239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.105 [2024-07-23 18:22:51.448266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.448373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.448399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 
00:34:44.106 [2024-07-23 18:22:51.448484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.448510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.448612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.448636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.448781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.448807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.448928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.448952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.449053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.449081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 
00:34:44.106 [2024-07-23 18:22:51.449200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.449239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.449400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.449439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.449543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.449570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.449663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.449690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.449776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.449801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 
00:34:44.106 [2024-07-23 18:22:51.449920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.449946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.450094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.450121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.450249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.450279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.450378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.450406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 00:34:44.106 [2024-07-23 18:22:51.450527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.106 [2024-07-23 18:22:51.450551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.106 qpair failed and we were unable to recover it. 
00:34:44.106 [2024-07-23 18:22:51.450638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.450664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.450764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.450789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.450885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.450912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.451040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.451065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.451160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.451185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.451285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.451311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.451461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.451486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.451575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.451601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.451725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.451750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.451864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.451889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.451998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.452036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.452137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.452164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.452281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.452327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.452433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.452461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.452567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.452599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.452723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.452748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.452880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.452908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.106 [2024-07-23 18:22:51.453006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.106 [2024-07-23 18:22:51.453032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.106 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.453156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.453181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.453299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.453331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.453433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.453459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.453549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.453575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.453701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.453727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.453847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.453872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.453987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.454025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.454134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.454161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.454306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.454340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.454436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.454460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.454557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.454582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.454702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.454726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.454815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.454839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.454938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.454964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.455056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.455081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.455207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.455233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.455337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.455376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.455480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.455507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.455606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.455632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.455732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.455759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.455885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.455911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.456048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.456076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.456162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.456188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.456297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.456347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.456475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.456502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.456624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.456650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.456767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.456792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.456893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.456919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.457017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.457045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.457141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.457165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.457278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.457303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.457409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.457438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.457541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.457567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.457687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.457713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.457808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.457834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.457962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.457990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.458081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.458107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.107 [2024-07-23 18:22:51.458265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.107 [2024-07-23 18:22:51.458290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.107 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.458395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.458421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.458511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.458537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.458659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.458685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.458804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.458829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.458921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.458947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.459058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.459096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.459234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.459272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.459378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.459406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.459502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.459528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.459623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.459648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.459743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.459769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.459859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.459885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.460010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.460039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.460160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.460185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.460283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.460307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.460447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.460472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.460594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.460618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.460720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.460745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.460837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.460862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.460959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.460984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.461098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.461123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.461221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.461248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.461346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.461373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.461494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.461519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.461637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.461662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.461753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.461779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.461884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.461910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.462034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.462061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.462177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.462201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.462321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.462360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.462514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.462540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.462636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.462661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.462752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.462777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.462928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.462953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.463054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.463081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.463176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.463202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.463322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.463349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.108 qpair failed and we were unable to recover it.
00:34:44.108 [2024-07-23 18:22:51.463443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.108 [2024-07-23 18:22:51.463468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.109 qpair failed and we were unable to recover it.
00:34:44.109 [2024-07-23 18:22:51.463563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.109 [2024-07-23 18:22:51.463588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.109 qpair failed and we were unable to recover it.
00:34:44.109 [2024-07-23 18:22:51.463690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.463716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.463814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.463839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.463929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.463956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.464077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.464102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.464214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.464253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 
00:34:44.109 [2024-07-23 18:22:51.464359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.464387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.464518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.464548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.464696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.464722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.464856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.464882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.465003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.465029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 
00:34:44.109 [2024-07-23 18:22:51.465159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.465185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.465276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.465304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.465457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.465496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.465605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.465638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.465732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.465758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 
00:34:44.109 [2024-07-23 18:22:51.465879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.465904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.466007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.466033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.466163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.466201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.466334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.466363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.466489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.466515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 
00:34:44.109 [2024-07-23 18:22:51.466638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.466664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.466754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.466780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.466872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.466897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.467018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.467044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.467153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.467191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 
00:34:44.109 [2024-07-23 18:22:51.467300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.467335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.467452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.467486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.467617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.467643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.467736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.467762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.467910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.467936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 
00:34:44.109 [2024-07-23 18:22:51.468031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.468058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.468177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.468202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.468345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.468384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.468481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.468508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.468635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.468663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 
00:34:44.109 [2024-07-23 18:22:51.468793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.109 [2024-07-23 18:22:51.468819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.109 qpair failed and we were unable to recover it. 00:34:44.109 [2024-07-23 18:22:51.468966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.468991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.469089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.469116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.469243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.469269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.469378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.469407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 
00:34:44.110 [2024-07-23 18:22:51.469518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.469556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.469710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.469736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.469832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.469856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.469956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.469981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.470129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.470158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 
00:34:44.110 [2024-07-23 18:22:51.470284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.470310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.470416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.470442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.470565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.470601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.470736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.470763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.470853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.470878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 
00:34:44.110 [2024-07-23 18:22:51.470972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.470998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.471113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.471152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.471276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.471304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.471406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.471439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.471541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.471566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 
00:34:44.110 [2024-07-23 18:22:51.471673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.471698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.471803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.471827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.471941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.471966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.472064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.472089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.472196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.472236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 
00:34:44.110 [2024-07-23 18:22:51.472394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.472423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.472522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.472548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.472639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.472664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.472746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.472771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.472894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.472920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 
00:34:44.110 [2024-07-23 18:22:51.473043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.473069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.473175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.473213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.473393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.110 [2024-07-23 18:22:51.473432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.110 qpair failed and we were unable to recover it. 00:34:44.110 [2024-07-23 18:22:51.473538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.473565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.473658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.473685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 
00:34:44.111 [2024-07-23 18:22:51.473776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.473801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.473912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.473938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.474058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.474085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.474234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.474263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.474366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.474393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 
00:34:44.111 [2024-07-23 18:22:51.474517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.474542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.474665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.474690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.474812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.474837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.474925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.474951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.475070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.475096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 
00:34:44.111 [2024-07-23 18:22:51.475195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.475224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.475312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.475344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.475439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.475463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.475561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.475586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.475690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.475714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 
00:34:44.111 [2024-07-23 18:22:51.475804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.475828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.475917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.475941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.476045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.476073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.476157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.476184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.476290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.476321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 
00:34:44.111 [2024-07-23 18:22:51.476451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.476477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.476577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.476604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.476728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.476753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.476875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.476900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.477001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.477027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 
00:34:44.111 [2024-07-23 18:22:51.477129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.477156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.477264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.477290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.477394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.477422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.477521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.477547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.477648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.477673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 
00:34:44.111 [2024-07-23 18:22:51.477792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.477816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.477915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.477943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.478057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.478082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.478210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.478237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.478347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.478386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 
00:34:44.111 [2024-07-23 18:22:51.478483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.111 [2024-07-23 18:22:51.478511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.111 qpair failed and we were unable to recover it. 00:34:44.111 [2024-07-23 18:22:51.478633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.478659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.478780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.478812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.478900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.478926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.479052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.479080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 
00:34:44.112 [2024-07-23 18:22:51.479175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.479202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.479292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.479324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.479476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.479502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.479621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.479646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.479770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.479795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 
00:34:44.112 [2024-07-23 18:22:51.479887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.479913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.480034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.480061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.480201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.480240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.480374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.480404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.480523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.480549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 
00:34:44.112 [2024-07-23 18:22:51.480683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.480709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.480821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.480847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.480938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.480964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.481060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.481087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.481209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.481234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 
00:34:44.112 [2024-07-23 18:22:51.481336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.481364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.481486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.481512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.481604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.481631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.481726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.481752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.481854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.481879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 
00:34:44.112 [2024-07-23 18:22:51.481996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.482035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.482160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.482187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.482287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.482322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.482463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.482489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.482599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.482637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 
00:34:44.112 [2024-07-23 18:22:51.482735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.482760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.482882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.482909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.483025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.483050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.483147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.483172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.483258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.483283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 
00:34:44.112 [2024-07-23 18:22:51.483399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.483426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.483548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.483576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.483671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.483698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.112 qpair failed and we were unable to recover it. 00:34:44.112 [2024-07-23 18:22:51.483790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.112 [2024-07-23 18:22:51.483816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.483907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.483937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 
00:34:44.113 [2024-07-23 18:22:51.484063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.484088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.484176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.484202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.484293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.484332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.484455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.484480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.484578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.484604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 
00:34:44.113 [2024-07-23 18:22:51.484702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.484727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.484846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.484871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.484998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.485025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.485121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.485147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.485241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.485267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 
00:34:44.113 [2024-07-23 18:22:51.485363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.485389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.485514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.485540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.485634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.485659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.485763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.485790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.485892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.485918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 
00:34:44.113 [2024-07-23 18:22:51.486009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.486035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.486136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.486162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.486276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.486323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.486452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.486479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.486601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.486627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 
00:34:44.113 [2024-07-23 18:22:51.486719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.486744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.486890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.486915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.487020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.487048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.487150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.487176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.487266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.487292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 
00:34:44.113 [2024-07-23 18:22:51.487416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.487443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.487537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.487563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.487683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.487708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.487798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.487823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.487921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.487949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 
00:34:44.113 [2024-07-23 18:22:51.488071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.488097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.488200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.488239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.488365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.488393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.488531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.488569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.488693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.488719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 
00:34:44.113 [2024-07-23 18:22:51.488814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.113 [2024-07-23 18:22:51.488839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.113 qpair failed and we were unable to recover it. 00:34:44.113 [2024-07-23 18:22:51.488933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.488957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 00:34:44.114 [2024-07-23 18:22:51.489074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.489099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 00:34:44.114 [2024-07-23 18:22:51.489199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.489223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 00:34:44.114 [2024-07-23 18:22:51.489345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.489373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 
00:34:44.114 [2024-07-23 18:22:51.489480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.489506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 00:34:44.114 [2024-07-23 18:22:51.489601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.489626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 00:34:44.114 [2024-07-23 18:22:51.489748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.489774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 00:34:44.114 [2024-07-23 18:22:51.489928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.489956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 00:34:44.114 [2024-07-23 18:22:51.490052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.114 [2024-07-23 18:22:51.490077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.114 qpair failed and we were unable to recover it. 
00:34:44.114 [2024-07-23 18:22:51.490201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.490226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.490356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.490382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.490487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.490513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.490610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.490635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.490760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.490788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.490897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.490923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.491041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.491068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.491197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.491221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.491337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.491366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.491462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.491487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.491586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.491612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.491739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.491765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.491883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.491910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.492030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.492056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.492146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.492173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.492326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.492351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.492445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.492470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.492567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.492593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.492689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.492713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.492813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.492838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.492933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.492958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.493047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.493071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.493200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.493226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.493324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.493349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.493495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.493525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.493640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.493679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.493784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.493810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.493933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.493959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.494104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.114 [2024-07-23 18:22:51.494130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.114 qpair failed and we were unable to recover it.
00:34:44.114 [2024-07-23 18:22:51.494228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.494255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.494396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.494434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.494537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.494565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.494688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.494715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.494838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.494863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.494958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.494984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.495068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.495093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.495191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.495216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.495345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.495373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.495478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.495503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.495593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.495617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.495714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.495739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.495829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.495854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.495947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.495971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.496057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.496082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.496224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.496262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.496397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.496426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.496528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.496554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.496642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.496668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.496786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.496811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.496904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.496933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.497092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.497118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.497228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.497271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.497391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.497419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.497540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.497566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.497686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.497712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.497802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.497827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.497916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.497943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.498035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.498061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.498187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.498213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.498336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.498363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.498459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.115 [2024-07-23 18:22:51.498484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.115 qpair failed and we were unable to recover it.
00:34:44.115 [2024-07-23 18:22:51.498608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.498636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.498760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.498785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.498899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.498924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.499049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.499074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.499167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.499192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.499305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.499350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.499451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.499477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.499596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.499621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.499739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.499763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.499863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.499891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.499982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.500008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.500126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.500152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.500276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.500301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.500428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.500454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.500549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.500575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.500665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.500689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.500814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.500839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.500965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.500994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.501118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.501143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.501236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.501261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.501384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.501410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.501500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.501525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.501616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.501641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.501737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.501763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.501883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.501909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.502008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.502033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.502182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.502210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.502335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.502364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.502474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.502512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.502620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.502652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.502784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.502816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.502907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.502933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.503029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.503054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.503184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.116 [2024-07-23 18:22:51.503210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.116 qpair failed and we were unable to recover it.
00:34:44.116 [2024-07-23 18:22:51.503347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.116 [2024-07-23 18:22:51.503386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.116 qpair failed and we were unable to recover it. 00:34:44.116 [2024-07-23 18:22:51.503491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.116 [2024-07-23 18:22:51.503517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.116 qpair failed and we were unable to recover it. 00:34:44.116 [2024-07-23 18:22:51.503613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.116 [2024-07-23 18:22:51.503637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.116 qpair failed and we were unable to recover it. 00:34:44.116 [2024-07-23 18:22:51.503784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.116 [2024-07-23 18:22:51.503808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.116 qpair failed and we were unable to recover it. 00:34:44.116 [2024-07-23 18:22:51.503905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.503932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 
00:34:44.117 [2024-07-23 18:22:51.504052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.504078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.504205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.504231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.504329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.504355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.504470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.504495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.504614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.504640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 
00:34:44.117 [2024-07-23 18:22:51.504741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.504771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.504903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.504941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.505070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.505098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.505196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.505223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.505345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.505372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 
00:34:44.117 [2024-07-23 18:22:51.505477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.505503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.505599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.505625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.505744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.505769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.505860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.505886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.506012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.506037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 
00:34:44.117 [2024-07-23 18:22:51.506160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.506185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.506284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.506311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.506452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.506479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.506573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.506603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.506704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.506730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 
00:34:44.117 [2024-07-23 18:22:51.506831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.506856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.506959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.506986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.507108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.507133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.507230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.507255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.507351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.507377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 
00:34:44.117 [2024-07-23 18:22:51.507512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.507551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.507647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.507674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.507825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.507850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.507944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.507970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.508109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.508147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 
00:34:44.117 [2024-07-23 18:22:51.508324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.508363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.508470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.508496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.508597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.508623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.508724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.508750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.508867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.508893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 
00:34:44.117 [2024-07-23 18:22:51.509012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.509038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.117 [2024-07-23 18:22:51.509135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.117 [2024-07-23 18:22:51.509160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.117 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.509257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.509283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.509385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.509412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.509504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.509530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 
00:34:44.118 [2024-07-23 18:22:51.509650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.509675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.509803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.509829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.509928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.509953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.510038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.510063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.510149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.510175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 
00:34:44.118 [2024-07-23 18:22:51.510269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.510296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.510424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.510453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.510557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.510584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.510677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.510703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.510797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.510823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 
00:34:44.118 [2024-07-23 18:22:51.510948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.510975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.511067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.511093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.511188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.511213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.511324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.511363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.511464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.511491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 
00:34:44.118 [2024-07-23 18:22:51.511580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.511606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.511703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.511729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.511818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.511843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.511935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.511968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.512089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.512115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.512161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:44.118 [2024-07-23 18:22:51.512198] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.118 [2024-07-23 18:22:51.512212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.118 [2024-07-23 18:22:51.512225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.118 [2024-07-23 18:22:51.512236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.118 [2024-07-23 18:22:51.512216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.512253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.512360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.512387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.512488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.512518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 
00:34:44.118 [2024-07-23 18:22:51.512461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:44.118 [2024-07-23 18:22:51.512492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:44.118 [2024-07-23 18:22:51.512544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:44.118 [2024-07-23 18:22:51.512547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:44.118 [2024-07-23 18:22:51.512640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.512665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.512759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.512782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.512882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.512907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.512998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.513025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 
00:34:44.118 [2024-07-23 18:22:51.513116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.513142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.513263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.513289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.513391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.513417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.513509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.118 [2024-07-23 18:22:51.513535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.118 qpair failed and we were unable to recover it. 00:34:44.118 [2024-07-23 18:22:51.513633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.513659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 
00:34:44.119 [2024-07-23 18:22:51.513757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.513783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 00:34:44.119 [2024-07-23 18:22:51.513878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.513902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 00:34:44.119 [2024-07-23 18:22:51.513989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.514014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 00:34:44.119 [2024-07-23 18:22:51.514104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.514128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 00:34:44.119 [2024-07-23 18:22:51.514216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.514241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 
00:34:44.119 [2024-07-23 18:22:51.514339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.514364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 00:34:44.119 [2024-07-23 18:22:51.514466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.514491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 00:34:44.119 [2024-07-23 18:22:51.514586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.514610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 00:34:44.119 [2024-07-23 18:22:51.514693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.514718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 00:34:44.119 [2024-07-23 18:22:51.514810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.119 [2024-07-23 18:22:51.514834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.119 qpair failed and we were unable to recover it. 
00:34:44.119 [2024-07-23 18:22:51.515175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.119 [2024-07-23 18:22:51.515203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.119 qpair failed and we were unable to recover it.
00:34:44.119 [2024-07-23 18:22:51.515810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.119 [2024-07-23 18:22:51.515848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.119 qpair failed and we were unable to recover it.
00:34:44.120 [2024-07-23 18:22:51.517194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.120 [2024-07-23 18:22:51.517232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.120 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.529283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.529309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 00:34:44.122 [2024-07-23 18:22:51.529420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.529446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 00:34:44.122 [2024-07-23 18:22:51.529538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.529564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 00:34:44.122 [2024-07-23 18:22:51.529650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.529676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 00:34:44.122 [2024-07-23 18:22:51.529768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.529796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 
00:34:44.122 [2024-07-23 18:22:51.529890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.529916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 00:34:44.122 [2024-07-23 18:22:51.530008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.530034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 00:34:44.122 [2024-07-23 18:22:51.530159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.530184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 00:34:44.122 [2024-07-23 18:22:51.530282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.530308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 00:34:44.122 [2024-07-23 18:22:51.530425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.122 [2024-07-23 18:22:51.530452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.122 qpair failed and we were unable to recover it. 
00:34:44.122 [2024-07-23 18:22:51.530544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.530569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.530670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.530696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.530792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.530831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.530950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.530977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.531112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.531137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.531230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.531256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.531375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.531402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.531492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.531517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.531609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.531635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.531750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.531776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.531883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.531922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.532020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.532048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.532138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.532163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.532247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.532272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.532442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.122 [2024-07-23 18:22:51.532472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.122 qpair failed and we were unable to recover it.
00:34:44.122 [2024-07-23 18:22:51.532569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.532595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.532795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.532821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.532950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.532976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.533080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.533105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.533197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.533225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.533331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.533358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.533445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.533471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.533565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.533591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.533685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.533713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.533801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.533827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.533912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.533937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.534039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.534064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.534153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.534183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.534272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.534298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.534396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.534422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.534562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.534588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.534702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.534728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.534842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.534867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.534983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.535009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.535128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.535167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.535282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.535309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.535410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.535435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.535528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.535555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.535645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.535671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.535792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.535817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.535911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.535936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.536039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.536069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.536164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.536190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.536284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.536311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.536419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.536446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.536535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.536560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.536648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.536674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.536807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.536832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.536921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.536946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.537068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.537095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.537198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.537226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.537348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.537387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.537488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.123 [2024-07-23 18:22:51.537515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.123 qpair failed and we were unable to recover it.
00:34:44.123 [2024-07-23 18:22:51.537610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.537635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.537728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.537758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.537876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.537901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.537992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.538016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.538125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.538151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.538240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.538268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.538413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.538439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.538532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.538559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.538654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.538680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.538763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.538789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.538878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.538903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.539039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.539065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 
00:34:44.124 [2024-07-23 18:22:51.539178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.539216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.539323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.539351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.539437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.539463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.539555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.539582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.539674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.539699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 
00:34:44.124 [2024-07-23 18:22:51.539793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.539818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.539937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.539962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.540049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.540074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.540198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.540224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 00:34:44.124 [2024-07-23 18:22:51.540350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.124 [2024-07-23 18:22:51.540378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.124 qpair failed and we were unable to recover it. 
00:34:44.124 [2024-07-23 18:22:51.540478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.540503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.540586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.540611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.540723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.540749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.540844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.540869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.540990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.541016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.541108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.541135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.541245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.541272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.541483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.541510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.541655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.541681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.124 [2024-07-23 18:22:51.541770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.124 [2024-07-23 18:22:51.541795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.124 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.541920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.541946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.542051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.542077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.542177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.542202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.542328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.542355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.542449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.542475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.542567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.542593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.542685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.542711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.542813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.542841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.542925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.542951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.543100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.543130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.543227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.543254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.543350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.543376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.543467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.543492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.543586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.543612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.543705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.543731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.543834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.543874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.544081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.544108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.544215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.544255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.544354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.544389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.544490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.544517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.544613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.544639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.544742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.544770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.544871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.544896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.544990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.545017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.545136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.545162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.545251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.545278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.545488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.545516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.545610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.545636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.545757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.545782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.545894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.545920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.546010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.546035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.546140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.546179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.546291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.546325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.546427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.546453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.546550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.546576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.546668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.546693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.546784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.546810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.125 [2024-07-23 18:22:51.546907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.125 [2024-07-23 18:22:51.546934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.125 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.547034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.547063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.547183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.547221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.547344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.547381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.547484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.547511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.547610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.547636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.547740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.547767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.547892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.547920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.548121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.548147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.548242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.548267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.548364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.548390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.548480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.548506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.548594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.548624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.548716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.548743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.548830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.548856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.548979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.549004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.549097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.549125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.549235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.549274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.549385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.549411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.549530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.549555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.549679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.549703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.549794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.549818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.549922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.549949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.550044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.550072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.550170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.550196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.550312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.550344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.550469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.550495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.550583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.550609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.550709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.550736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.550839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.550865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.550983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.551009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.551132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.551160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.551256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.551280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.551401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.551441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.551569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.551596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.551692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.551720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.551824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.551856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.551943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.551969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.126 [2024-07-23 18:22:51.552060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.126 [2024-07-23 18:22:51.552089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.126 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.552204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.552251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.552350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.552377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.552467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.552491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.552615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.552639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.552733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.552758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.552851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.552875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.552992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.553020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.553115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.553143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.553233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.553258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.553379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.553406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.553488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.553513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.553606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.553632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.553757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.553783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.553898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.553925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.554043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.554069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.554159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.554185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.554309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.554342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.554447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.554471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.554550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.554576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.554668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.554695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.554791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.554816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.554901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.554926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.555022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.555047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.555142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.555169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.555257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.555284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.555396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.555424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.555510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.555534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.555654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.555681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.555770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.555796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.555896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.555924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.556013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.556038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.556138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.556177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.556307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.556342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.556435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.556461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.556564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.556591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.556709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.556735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.556848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.556874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.127 qpair failed and we were unable to recover it.
00:34:44.127 [2024-07-23 18:22:51.556960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.127 [2024-07-23 18:22:51.556985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.557081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.557120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.557264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.557302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.557411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.557439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.557541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.557567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.557658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.557684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.557769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.557795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.557913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.557938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.558059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.558084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.558176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.558201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.558291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.558324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.558454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.558479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.558570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.558597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.558722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.558748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.558841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.558866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.558990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.559015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.559105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.559131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.559249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.559275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.559390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.559428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.559532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.559559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.559663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.559688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.559777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.559803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.559897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.559925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.560017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.560042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.560131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.560158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.560253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.560278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.560386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.560412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.560510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.560536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.560737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.560763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.560851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.560876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.560969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.560999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.561126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.561154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.561240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.561265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.561385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.561411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.561501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.561526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.561616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.561640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.561769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.561793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.561906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.128 [2024-07-23 18:22:51.561931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.128 qpair failed and we were unable to recover it.
00:34:44.128 [2024-07-23 18:22:51.562033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.562056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.562144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.562168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.562258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.562282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.562386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.562412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.562504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.562529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.562663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.562689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.562789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.562813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.562907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.562932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.563048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.563075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.563179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.563218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.563310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.563344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.563443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.563470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.563575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.563602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.563691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.563716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.563817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.563843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.563932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.563957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.564064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.564089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.564173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.564198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.564289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.564324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.564423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.564457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.564552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.564578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.564681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.564706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.564821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.564846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.564973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.565001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.565113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.565138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.565228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.565254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.565377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.565402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.565522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.565547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.565641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.565665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.565749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.565773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.565864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.565888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.565989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.129 [2024-07-23 18:22:51.566028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.129 qpair failed and we were unable to recover it.
00:34:44.129 [2024-07-23 18:22:51.566124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.566153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.566255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.566282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.566391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.566427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.566521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.566547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.566681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.566711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.566801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.566827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.566926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.566951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.567052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.567081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.567171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.567197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.567280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.567305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.567437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.567463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.567558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.567584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.567685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.567711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.567836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.567864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.567949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.567977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.568091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.568129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.568224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.568252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.568347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.568374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.568583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.568608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.568727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.568752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.568866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.568892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.568985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.569011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.569143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.569172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.569273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.569300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.569420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.569447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.569543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.569569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.569664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.569689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.569774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.569800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.569887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.569912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.569997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.570023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.570139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.570165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.570253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.570278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.570378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.570406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.570508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.570547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.570676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.570702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.570793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.570817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.570902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.570927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.130 qpair failed and we were unable to recover it.
00:34:44.130 [2024-07-23 18:22:51.571048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.130 [2024-07-23 18:22:51.571076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.571162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.571188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.571274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.571300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.571451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.571477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.571571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.571597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.571775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.571800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.571895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.571921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.572016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.572041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.572131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.572158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.572277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.572304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.572417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.572445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.572545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.572571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.572695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.572721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.572813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.572840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.572934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.572960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.573051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.573076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.573173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.573199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.573285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.573320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.573410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.573436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.573549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.573574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.573668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.573694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.573812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.573837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.573930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.573955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.574044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.574069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.574199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.574238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.574341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.574370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.574462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.574487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.574574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.574600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.574691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.574716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.574800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.574825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.574946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.574974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.575120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.575159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.575253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.575279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.575388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.575416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.575515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.575541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.575640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.575665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.575802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.575828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.575946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.575971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.576055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.576081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.576178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.576206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.576296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.131 [2024-07-23 18:22:51.576338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.131 qpair failed and we were unable to recover it.
00:34:44.131 [2024-07-23 18:22:51.576429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.576455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.576550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.576574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.576671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.576695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.576809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.576839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.576935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.576960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.577055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.577083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.577174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.577200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.577296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.577332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.577473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.577499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.577590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.577616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.577736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.577761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.577876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.577903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.577995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.578020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.578115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.132 [2024-07-23 18:22:51.578142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.132 qpair failed and we were unable to recover it.
00:34:44.132 [2024-07-23 18:22:51.578237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.578262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.578383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.578410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.578498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.578524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.578618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.578643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.578750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.578776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 
00:34:44.132 [2024-07-23 18:22:51.578879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.578906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.578994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.579020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.579128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.579168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.579274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.579302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.579405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.579432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 
00:34:44.132 [2024-07-23 18:22:51.579525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.579551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.579647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.579673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.579772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.579798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.579889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.579916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.580035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.580061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 
00:34:44.132 [2024-07-23 18:22:51.580195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.580221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.580337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.580364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.580484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.580510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.580608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.580636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.580761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.580787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 
00:34:44.132 [2024-07-23 18:22:51.580878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.580906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.581030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.581056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.581150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.581176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.581259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.581284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.581415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.581441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 
00:34:44.132 [2024-07-23 18:22:51.581528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.581553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.581643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.132 [2024-07-23 18:22:51.581668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.132 qpair failed and we were unable to recover it. 00:34:44.132 [2024-07-23 18:22:51.581757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.581782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.581919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.581958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.582084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.582116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-07-23 18:22:51.582244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.582273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.582370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.582397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.582504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.582532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.582626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.582653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.582753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.582780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-07-23 18:22:51.582872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.582898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.583043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.583070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.583162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.583186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.583276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.583302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.583446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.583471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-07-23 18:22:51.583562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.583588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.583694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.583719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.583839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.583865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.583969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.583995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.584107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.584146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-07-23 18:22:51.584242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.584268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.584366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.584393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.584508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.584534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.584627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.584653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.584773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.584799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-07-23 18:22:51.584945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.584970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.585057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.585083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.585180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.585205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.585328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.585356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.585446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.585471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-07-23 18:22:51.585566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.585591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.585707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.585737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.585833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.585858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.585947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.585972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.586063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.586089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-07-23 18:22:51.586203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.586243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.586346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.586374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.586476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.586502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.586625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-07-23 18:22:51.586651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-07-23 18:22:51.586778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.586804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 
00:34:44.134 [2024-07-23 18:22:51.586928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.586953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.587041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.587066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.587158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.587184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.587274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.587301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.587412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.587439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 
00:34:44.134 [2024-07-23 18:22:51.587549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.587574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.587658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.587683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.587797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.587821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.587909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.587934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.588018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.588042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 
00:34:44.134 [2024-07-23 18:22:51.588134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.588160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.588283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.588308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.588417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.588443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.588555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.588579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.588695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.588720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 
00:34:44.134 [2024-07-23 18:22:51.588811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.588835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.588928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.588954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.589048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.589072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.589183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.589233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.589331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.589360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 
00:34:44.134 [2024-07-23 18:22:51.589453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.589478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.589582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.589607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.589702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.589728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.589816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.589842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.589945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.589971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 
00:34:44.134 [2024-07-23 18:22:51.590071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.590109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.590263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.590290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.590448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.590475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.590566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.590591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 00:34:44.134 [2024-07-23 18:22:51.590746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.134 [2024-07-23 18:22:51.590771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.134 qpair failed and we were unable to recover it. 
00:34:44.134 [2024-07-23 18:22:51.590897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.590923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.591011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.591038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.591179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.591218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.591322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.591351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.591463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.591490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.591586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.591610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.591728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.591753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.591875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.591899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.591999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.592025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.592116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.592142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.592345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.592371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.592465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.592492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.592597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.592622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.134 qpair failed and we were unable to recover it.
00:34:44.134 [2024-07-23 18:22:51.592710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.134 [2024-07-23 18:22:51.592735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 [2024-07-23 18:22:51.592827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.592855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.592969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.593007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.593144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.593184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.593311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.593345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.593438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.593466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.593556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.593581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.593672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.593697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.593792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.593818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.593918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.593944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.594043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.594069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.594159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.594185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.594282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.594309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.594426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.594453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.594564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.594601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.594695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.594726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.594850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.594878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.594974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.595000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.595088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.595113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.595235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.595261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.595377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.595405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.595499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.595525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.595611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.595637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.595754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.595780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.595887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.595925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.596022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.596049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.596151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.596179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.596302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.596334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.596422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.596448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.596545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.596571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.596691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.596716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.596840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.596866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.596965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.596992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.597111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.597136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.597234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.597260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.597353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.597378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.597477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.597502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.597594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.597618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.597732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.597757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.597850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.597874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.597965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.597991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.598080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.598108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.598207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.598237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.598353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.598379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.598471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.135 [2024-07-23 18:22:51.598497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.135 qpair failed and we were unable to recover it.
00:34:44.135 [2024-07-23 18:22:51.598600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.598626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.598742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.598769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.598858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.598884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.598977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.599002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.599121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.599147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.599238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.599264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.599354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.599380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.599473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.599500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.599618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.599643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.599723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.599748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.599866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.599892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.599999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.600026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.600159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.600198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.600312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.600356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.600455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.600482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.600585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.600610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.600698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.600724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.600873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.600899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.600992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.601017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.601138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.601164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.601255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.601282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.601385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.601411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.601505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.601531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.601633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.601658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.601791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.601829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.601921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.601947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.602071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.602098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.602235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.602261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.602352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.602378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.602467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.602492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.602612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.602638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.602756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.602781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.602902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.602927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.603017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.603042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.603132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.603159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.603262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.603300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.603408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.136 [2024-07-23 18:22:51.603435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.136 qpair failed and we were unable to recover it.
00:34:44.136 [2024-07-23 18:22:51.603532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-07-23 18:22:51.603562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-07-23 18:22:51.603766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-07-23 18:22:51.603792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-07-23 18:22:51.603913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-07-23 18:22:51.603938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-07-23 18:22:51.604028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-07-23 18:22:51.604054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-07-23 18:22:51.604148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-07-23 18:22:51.604173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 
00:34:44.136 [2024-07-23 18:22:51.604330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-07-23 18:22:51.604360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-07-23 18:22:51.604459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-07-23 18:22:51.604484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-07-23 18:22:51.604568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.604594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.604721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.604745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.604853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.604878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-07-23 18:22:51.604973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.604997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.605113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.605137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.605222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.605246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.605341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.605370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.605464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.605490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-07-23 18:22:51.605610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.605636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.605760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.605786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.605871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.605897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.605984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.606010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.606102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.606129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-07-23 18:22:51.606255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.606284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.606416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.606443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.606543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.606568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.606660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.606686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.606805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.606831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-07-23 18:22:51.606949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.606974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.607101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.607126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.607214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.607244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.607343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.607369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.607490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.607516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-07-23 18:22:51.607602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.607628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.607716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.607742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.607861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.607886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.607971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.607996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.608092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.608119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-07-23 18:22:51.608239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.608264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.608469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.608495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.608592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.608618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.608712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.608738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.608829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.608854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-07-23 18:22:51.608974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.609000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.609090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.609115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.609241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.609267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.609361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.609387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-07-23 18:22:51.609467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-07-23 18:22:51.609492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-07-23 18:22:51.609584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.609610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.609705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.609730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.609831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.609870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.609960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.609986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.610104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.610129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-07-23 18:22:51.610220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.610244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.610335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.610363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.610460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.610485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.610577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.610602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.610706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.610745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-07-23 18:22:51.610847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.610874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.610969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.610996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.611083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.611109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.611191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.611216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.611309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.611347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-07-23 18:22:51.611438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.611463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.611553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.611578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.611694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.611720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.611809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.611837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.611927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.611952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-07-23 18:22:51.612043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.612069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.612167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.612191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.612279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.612304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.612411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.612435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.612523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.612547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-07-23 18:22:51.612633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.612657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.612775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.612799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.612887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.612914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.612995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.613020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.613123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.613152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-07-23 18:22:51.613251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.613279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.613370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.613397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.613528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.613554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.613646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.613673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.613757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.613781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-07-23 18:22:51.613878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.613904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.614010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.614038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.614150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.614188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.614281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.614309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-07-23 18:22:51.614407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-07-23 18:22:51.614432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-07-23 18:22:51.614558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.138 [2024-07-23 18:22:51.614584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.138 qpair failed and we were unable to recover it.
00:34:44.138 [2024-07-23 18:22:51.614673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.138 [2024-07-23 18:22:51.614699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.138 qpair failed and we were unable to recover it.
00:34:44.138 [2024-07-23 18:22:51.614788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.138 [2024-07-23 18:22:51.614815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.138 qpair failed and we were unable to recover it.
00:34:44.138 [2024-07-23 18:22:51.614902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.138 [2024-07-23 18:22:51.614926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.138 qpair failed and we were unable to recover it.
00:34:44.138 [2024-07-23 18:22:51.615040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.138 [2024-07-23 18:22:51.615078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.138 qpair failed and we were unable to recover it.
00:34:44.138 [2024-07-23 18:22:51.615166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.138 [2024-07-23 18:22:51.615193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.138 qpair failed and we were unable to recover it.
00:34:44.138 [2024-07-23 18:22:51.615285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.138 [2024-07-23 18:22:51.615310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.138 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.615411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.615438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.615540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.615566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.615656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.615686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.615781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.615806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.615896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.615922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.616052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.616176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.616315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.616438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.616550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.616672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.616790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.616905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.616993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.617018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.617110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.617134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.617282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.617309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.617412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.617438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.617525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.617550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.617641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.617667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.617783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.617809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.617895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.617921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.618044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.618190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.618301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.618419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.618528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.618637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.618748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.618873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.618995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.619027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.619115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.619141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.619260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.619286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.619416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.619442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.619537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.619562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.619647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.619674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.619791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.619817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.619908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.619935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.620024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.620051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.620190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.620228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.620334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.620362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.620456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.620481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.620564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.620589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.620708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.620733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.620822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.620848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.620932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.620959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.621055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.621081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.621177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.139 [2024-07-23 18:22:51.621202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.139 qpair failed and we were unable to recover it.
00:34:44.139 [2024-07-23 18:22:51.621287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.621312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.621420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.621445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.621534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.621559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.621648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.621673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.621758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.621782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.621865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.621889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.621978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.622002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.622085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.622110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.622199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.622227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.622326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.622357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.622453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.622479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.622578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.622603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.622721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.622746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.622827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.622852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.622997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.623022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.623168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.623193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.623286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.623311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.623409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.623435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.623524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.623550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.623666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.623691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.623781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.623807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.623898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.623923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.624084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.624123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.624231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.624258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.624373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.624413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.624513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.624540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.624629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.624655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.624773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.624799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.624889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.624916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.625007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.625032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.625118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.625144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.625240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.625266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.625370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.625405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.625516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.625555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.625680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.625707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.625806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.625833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.625931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.625956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.626049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.626076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.626170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.626197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.626348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.626374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.626498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.626524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.626612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.626638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.626737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.626763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.626855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.626881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.626976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.627001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.627086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.627111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.627204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.140 [2024-07-23 18:22:51.627229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.140 qpair failed and we were unable to recover it.
00:34:44.140 [2024-07-23 18:22:51.627324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.627352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.627461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.627501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.627602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.627635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.627745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.627773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.627863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.627890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.628028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.628066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.628217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.628244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.628339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.628365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.628491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.628517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.628657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.628682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.628802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.628827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.628918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.628943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.629036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.629063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.629156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.141 [2024-07-23 18:22:51.629181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.141 qpair failed and we were unable to recover it.
00:34:44.141 [2024-07-23 18:22:51.629298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.629331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.629422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.629447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.629554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.629592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.629717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.629744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.629863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.629888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 
00:34:44.141 [2024-07-23 18:22:51.629978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.630003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.630095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.630120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.630239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.630264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.630365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.630391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.630487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.630512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 
00:34:44.141 [2024-07-23 18:22:51.630610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.630635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.630722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.630749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.630888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.630913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.631027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.631052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.631184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.631209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 
00:34:44.141 [2024-07-23 18:22:51.631297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.631348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.631467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.631492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.631586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.631611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.631753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.631779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-07-23 18:22:51.631871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-07-23 18:22:51.631896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 
00:34:44.141 [2024-07-23 18:22:51.631995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.632022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.632144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.632170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.632289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.632314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.632455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.632482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.632578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.632603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-07-23 18:22:51.632687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.632712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.632805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.632831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.632920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.632947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.633042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.633081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.633217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.633244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-07-23 18:22:51.633341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.633367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.633460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.633485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.633574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.633599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.633684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.633709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.633846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.633873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-07-23 18:22:51.634018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.634043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.634136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.634162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.634257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.634283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.634388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.634414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.634506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.634531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-07-23 18:22:51.634620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.634645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.634748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.634774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.634876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.634907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.635027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.635052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.635144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.635170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-07-23 18:22:51.635261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.635286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.635384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.635412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.635511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.635549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.635661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.635688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.635783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.635808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-07-23 18:22:51.635891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.635917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.636012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.636037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-07-23 18:22:51.636119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-07-23 18:22:51.636144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.636252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.636291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.636401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.636440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-07-23 18:22:51.636532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.636559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.636657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.636682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.636767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.636792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.636880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.636905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.637108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.637135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-07-23 18:22:51.637224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.637252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.637353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.637383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.637475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.637501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.637590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.637616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.637709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.637735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-07-23 18:22:51.637826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.637854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.637978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.638004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.638097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.638122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.638235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.638260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.638360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.638389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-07-23 18:22:51.638495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.638533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.638634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.638661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.638749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.638775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.638871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.638896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.638986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.639012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-07-23 18:22:51.639109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.639135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.639270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.639295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.639401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.639430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.639524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.639551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.639643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.639672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-07-23 18:22:51.639782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.639811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.639899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.639927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.640017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.640043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.640139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.640164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.640258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.640284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-07-23 18:22:51.640399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.640437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.640532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.640559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.640674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.640700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.640817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.640842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-07-23 18:22:51.640939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.640964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-07-23 18:22:51.641063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-07-23 18:22:51.641105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.641212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.641240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.641360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.641399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.641497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.641525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.641612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.641637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 
00:34:44.144 [2024-07-23 18:22:51.641727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.641752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.641853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.641882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.641975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.642002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.642109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.642148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.642241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.642268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 
00:34:44.144 [2024-07-23 18:22:51.642366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.642393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.642491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.642518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.642611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.642638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.642734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.642761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.642851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.642876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 
00:34:44.144 [2024-07-23 18:22:51.642981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.643008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.643125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.643151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.643271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.643299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.643400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.643427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.643510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.643542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 
00:34:44.144 [2024-07-23 18:22:51.643657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.643682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.643771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.643796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.643882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.643908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.643987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.644103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 
00:34:44.144 [2024-07-23 18:22:51.644214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.644332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.644447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.644568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.644689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 
00:34:44.144 [2024-07-23 18:22:51.644808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.644950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.644975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.645109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.645147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.645244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.645272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.645376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.645404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 
00:34:44.144 [2024-07-23 18:22:51.645498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.645522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.645614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.645639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.645725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.645750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.144 [2024-07-23 18:22:51.645836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.144 [2024-07-23 18:22:51.645860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.144 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.645951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.645976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-23 18:22:51.646062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.646087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.646188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.646216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.646307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.646343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.646448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.646474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.646566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.646592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-23 18:22:51.646683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.646709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.646834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.646866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.646953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.646978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.647099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.647124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.647220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.647248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-23 18:22:51.647343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.647369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.647458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.647484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.647570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.647595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.647682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.647707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.647830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.647857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-23 18:22:51.647941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.647966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.648061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.648086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.648204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.648230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.648335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.648362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.648458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.648484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-23 18:22:51.648583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.648610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.648696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.648721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.648810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.648835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.648917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.648942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.649068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.649106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-23 18:22:51.649207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.649236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.649337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.649365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.649465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.649491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.649616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.649643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:44.145 [2024-07-23 18:22:51.649764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.649790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-23 18:22:51.649884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.649911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.650003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.650032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:44.145 [2024-07-23 18:22:51.650125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.650166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.650266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.650291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.650383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.650409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 
00:34:44.145 [2024-07-23 18:22:51.650510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.145 [2024-07-23 18:22:51.650535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.145 qpair failed and we were unable to recover it. 00:34:44.145 [2024-07-23 18:22:51.650625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.650651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:44.146 [2024-07-23 18:22:51.650742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.650768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-23 18:22:51.650854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.650878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-23 18:22:51.650965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.650992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 
00:34:44.146 [2024-07-23 18:22:51.651075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.651102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:44.146 [2024-07-23 18:22:51.651192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.651219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-23 18:22:51.651313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.651344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-23 18:22:51.651439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.651465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.146 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.146 qpair failed and we were unable to recover it. 
00:34:44.146 [2024-07-23 18:22:51.651569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.651613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-23 18:22:51.651705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.651732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-23 18:22:51.651837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.651861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-23 18:22:51.651951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.651975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 00:34:44.146 [2024-07-23 18:22:51.652066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.146 [2024-07-23 18:22:51.652091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.146 qpair failed and we were unable to recover it. 
00:34:44.146 [2024-07-23 18:22:51.652176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.652200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.652290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.652315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.652423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.652448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.652536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.652560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.652650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.652678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.652800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.652826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.652931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.652970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.653081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.653109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.653198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.653224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.653367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.653393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.653492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.653518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.653604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.653629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.653718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.653744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.653864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.653889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.653988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.654014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.654103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.654129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.146 [2024-07-23 18:22:51.654222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.146 [2024-07-23 18:22:51.654251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.146 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.654344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.654381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.654486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.654526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.654624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.654651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.654778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.654805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.654887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.654912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.654997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.655029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.655148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.655174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.655269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.655294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.655410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.655445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.655533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.655558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.655696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.655721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.655815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.655841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.655941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.655969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.656063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.656089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.656214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.656242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.656367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.656394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.656495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.656521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.656617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.656644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.656730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.656756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.656856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.656881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.656997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.657023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.657120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.657146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.657359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.657385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.657478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.657504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.657589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.657615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.657758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.657784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.657874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.657902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.658023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.658049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.658130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.658155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.658247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.658272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.658378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.658404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.658491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.658517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.658615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.658641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.658759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.658786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.658877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.658904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.659034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.659059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.659148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.659173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.659258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.147 [2024-07-23 18:22:51.659284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.147 qpair failed and we were unable to recover it.
00:34:44.147 [2024-07-23 18:22:51.659421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.659450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.659549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.659575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.659663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.659689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.659780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.659806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.659919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.659945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.660959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.660985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.661075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.661108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.661201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.661227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.661324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.661350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.661446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.661471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.661559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.661585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.661674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.661700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.661791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.661816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.661916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.661943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.662030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.662056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.662174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.662199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.662291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.662321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.662409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.662434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.662523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.662549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.662662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.662689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.662783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.662808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.662948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.662975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.663066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.663092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.663184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.663209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.663297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.663330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.663427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.663453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.663548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.663573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.663664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.663690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.663804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.663829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.663919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.148 [2024-07-23 18:22:51.663944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.148 qpair failed and we were unable to recover it.
00:34:44.148 [2024-07-23 18:22:51.664033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.664059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.664147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.664173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.664264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.664292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.664416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.664455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.664555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.664581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.664672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.664698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.664816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.664840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.664929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.664953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.665044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.665069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.665191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.665223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.665326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.665355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.665447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.665473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.665562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.665588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.665702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.665727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.665816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.665842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.665935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.665961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.666057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.666083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.666204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.666230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.666327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.666355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.666454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.666479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.666567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.666593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.666682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.666708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.666800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.666827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.666917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.666942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.667062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.667087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.667181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.667220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.667312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.667347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.667442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.667468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.667552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.667576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.667662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.667686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.667788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.667812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.667937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.667962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.668061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.668086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.668174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.668198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.668286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.668310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.668410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.668435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.668517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.668547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.668634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.668659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-07-23 18:22:51.668764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-07-23 18:22:51.668789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.668877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.668902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.669030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.669057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.669181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.669207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.669303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.669335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.669457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.669482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.669572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.669598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.669691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.669717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.669845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.669872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.669964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.669989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.670087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.670112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.670228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.670252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.670346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.670372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.670464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.670488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.670577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.670604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.670695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.670721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.670841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.670867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.670993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.671019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.671108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.671133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.671267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.671306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.671420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.671447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.671538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.671564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.671683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.671709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.671800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.671825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.671909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.671934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.672024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.672056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.672154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.672179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.672272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.672298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.672403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.672430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.672518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.672544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.672660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.672686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.672768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.672793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.672892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-07-23 18:22:51.672918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-07-23 18:22:51.673031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.673057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.673173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.673199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.673325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.673354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.673468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.673507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.673616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.673655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.673751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.673778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.673874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.673900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.673985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.674011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.674135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.674162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.674287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.674315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.674419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.674444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.674538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.674564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.674651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.674675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.674791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.674816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.674901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.674926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.675008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.675032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.675153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.675178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.675311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.675344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.675453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.675479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.675582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.675613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.675729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.675756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.675874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.675900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.675988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.676015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.676108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:44.151 [2024-07-23 18:22:51.676136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.676282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.676327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.676457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.676484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:44.151 [2024-07-23 18:22:51.676601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.676628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.676748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.676773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.676871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.676897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.676982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.677008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:44.151 [2024-07-23 18:22:51.677124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.677150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.677271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.677296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.677403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:44.151 [2024-07-23 18:22:51.677429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.677524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.677551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.677638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.677664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.677752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.677777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-07-23 18:22:51.677892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-07-23 18:22:51.677917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.678021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.678060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.678186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.678216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.678309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.678343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.678432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.678458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.678548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.678572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.678659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.678684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.678808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.678832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.678951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.678976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.679063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.679088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.679213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.679239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.679350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.679376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.679473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.679498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.679597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.679622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.679706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.679731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.679854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.679878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.679965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.679990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.680094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.680133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.680230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.680257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.680355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.680393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.680497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.680524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.680621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.680661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.680780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.680805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.680941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.680967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.681056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.681080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.681177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.681204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.681295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.681327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.681423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.681449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.681538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.681563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.681681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.681706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.681826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.681851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.681943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.681970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.682059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.682083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.682245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.682270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.682352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.682382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.682470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.682496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.682585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.682609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.682694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.682718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.682817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-07-23 18:22:51.682846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-07-23 18:22:51.682935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.682961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.683053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.683079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.683170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.683197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.683297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.683333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.683434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.683460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.683550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.683576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.683663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.683688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.683834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.683859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.683953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.683979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.684067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.684096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.684185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.684213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.684308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.684340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.684465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.684491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.684616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.684642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.684735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.684762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.684855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.684881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.684965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.684991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.685101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.685125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.685216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.685242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.685339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.685364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.685455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.685480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.685560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.685584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.685672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.685698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.685787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.685812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.685899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.685925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.686974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.686999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.687093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.687120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.687239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.687264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.687362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.687394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.687483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.153 [2024-07-23 18:22:51.687508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.153 qpair failed and we were unable to recover it.
00:34:44.153 [2024-07-23 18:22:51.687597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.154 [2024-07-23 18:22:51.687623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.154 qpair failed and we were unable to recover it.
00:34:44.154 [2024-07-23 18:22:51.687714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.154 [2024-07-23 18:22:51.687739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.154 qpair failed and we were unable to recover it.
00:34:44.154 [2024-07-23 18:22:51.687856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.154 [2024-07-23 18:22:51.687881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.154 qpair failed and we were unable to recover it.
00:34:44.154 [2024-07-23 18:22:51.687981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.154 [2024-07-23 18:22:51.688008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.154 qpair failed and we were unable to recover it.
00:34:44.154 [2024-07-23 18:22:51.688102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.154 [2024-07-23 18:22:51.688127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.154 qpair failed and we were unable to recover it.
00:34:44.154 [2024-07-23 18:22:51.688219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.154 [2024-07-23 18:22:51.688247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.154 qpair failed and we were unable to recover it.
00:34:44.154 [2024-07-23 18:22:51.688343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.688369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.688457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.688483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.688601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.688627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.688753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.688778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.688868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.688894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-07-23 18:22:51.689012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.689038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.689133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.689158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.689263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.689303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.689446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.689473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.689594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.689620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-07-23 18:22:51.689716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.689742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.689853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.689879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.689967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.689995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.690131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.690170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.690287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.690336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-07-23 18:22:51.690437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.690464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.690612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.690638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.690733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.690762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.690888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.690916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.691044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.691071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-07-23 18:22:51.691166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.691192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.691283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.691310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.691423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.691449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.691562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.691588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.691682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.691707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-07-23 18:22:51.691845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.691872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.691996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.692022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.692145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.692171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.692264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.692291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.692429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.692455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-07-23 18:22:51.692545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.692572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.692667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-07-23 18:22:51.692692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-07-23 18:22:51.692816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.692851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.692933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.692959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.693050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.693078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 
00:34:44.155 [2024-07-23 18:22:51.693194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.693219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.693305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.693336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.693442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.693467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.693560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.693585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.693673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.693698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 
00:34:44.155 [2024-07-23 18:22:51.693786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.693813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.693936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.693962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.694050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.694076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.694178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.694203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.694290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.694322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 
00:34:44.155 [2024-07-23 18:22:51.694427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.694453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.694550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.694581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.694698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.694724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.694815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.694841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.694938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.694963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 
00:34:44.155 [2024-07-23 18:22:51.695056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.695083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.695200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.695240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.695395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.695423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.695512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.695538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.695675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.695701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 
00:34:44.155 [2024-07-23 18:22:51.695792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.695817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.695910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.695935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.696029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.696054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.696169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.696194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.696325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.696351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 
00:34:44.155 [2024-07-23 18:22:51.696446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.696471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.696559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.696585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.696682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.696708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.696803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.696828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 00:34:44.155 [2024-07-23 18:22:51.696912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.696938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.155 qpair failed and we were unable to recover it. 
00:34:44.155 [2024-07-23 18:22:51.697074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.155 [2024-07-23 18:22:51.697099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.697188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.697216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.697342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.697376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.697489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.697515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.697612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.697646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 
00:34:44.156 [2024-07-23 18:22:51.697764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.697790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.697876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.697901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.698010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.698040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.698162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.698189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.698280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.698305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 
00:34:44.156 [2024-07-23 18:22:51.698413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.698439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.698555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.698581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.698679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.698705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.698826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.698852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.698948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.698974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 
00:34:44.156 [2024-07-23 18:22:51.699090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.699116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.699202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.699228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.699356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.699390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.699486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.699513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.699641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.699668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 
00:34:44.156 [2024-07-23 18:22:51.699796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.699822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.699920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.699947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.700039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.700065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.700188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.700215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.700311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.700342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 
00:34:44.156 [2024-07-23 18:22:51.700439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.700464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.700551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.700583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.700675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.700700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.700793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.700821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 00:34:44.156 [2024-07-23 18:22:51.700912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.156 [2024-07-23 18:22:51.700938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.156 qpair failed and we were unable to recover it. 
00:34:44.156 [2024-07-23 18:22:51.701050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.701076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.156 qpair failed and we were unable to recover it.
00:34:44.156 [2024-07-23 18:22:51.701172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.701197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.156 qpair failed and we were unable to recover it.
00:34:44.156 [2024-07-23 18:22:51.701326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.701361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.156 qpair failed and we were unable to recover it.
00:34:44.156 [2024-07-23 18:22:51.701451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.701476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.156 qpair failed and we were unable to recover it.
00:34:44.156 [2024-07-23 18:22:51.701569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.701595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.156 qpair failed and we were unable to recover it.
00:34:44.156 [2024-07-23 18:22:51.701725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.701751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.156 qpair failed and we were unable to recover it.
00:34:44.156 [2024-07-23 18:22:51.701840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.701866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.156 qpair failed and we were unable to recover it.
00:34:44.156 [2024-07-23 18:22:51.701958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.701983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.156 qpair failed and we were unable to recover it.
00:34:44.156 [2024-07-23 18:22:51.702077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.156 [2024-07-23 18:22:51.702103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.702197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.702223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.702319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.702345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.702441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.702468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.702588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.702614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.702726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.702752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.702845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.702871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.702962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.702987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.703085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.703113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.703230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.703260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.703365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.703392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.703496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.703522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.703614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.703651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.703739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.703764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.703858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.703885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.703979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.704005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.704154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.704195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.704324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.704352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.704451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.704476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.704569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.704600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 Malloc0
00:34:44.157 [2024-07-23 18:22:51.704690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.704714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.704813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.704838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.704923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.704948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:44.157 [2024-07-23 18:22:51.705038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.705063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:44.157 [2024-07-23 18:22:51.705150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.705176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:44.157 [2024-07-23 18:22:51.705296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.705333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:44.157 [2024-07-23 18:22:51.705436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.705464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.705580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.705605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.705689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.705713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.705802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.705826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.705913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.705938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.706037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.706061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.706159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.706184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.706273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.706297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.706424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.706449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.706552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.706585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.706676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.157 [2024-07-23 18:22:51.706701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.157 qpair failed and we were unable to recover it.
00:34:44.157 [2024-07-23 18:22:51.706785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.706809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.706926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.706952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.707045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.707069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.707155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.707180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.707271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.707295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.707407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.707432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.707523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.707547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.707644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.707668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.707755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.707780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.707896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.707921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.708010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.708039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.708134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.708168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.708265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.708291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.708366] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:44.158 [2024-07-23 18:22:51.708396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.708421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.708508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.708532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.708628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.708653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.708755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.708781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.708866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.708891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.709012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.709038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.709136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.709163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.709277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.709331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.709446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.709474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.709582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.709609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.709704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.709730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.709829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.709862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.709955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.709982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.710092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.710131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.710237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.710275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.710390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.710418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.710508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.710534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.710619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.710645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.710735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.710760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.710881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.710907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.711012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.711039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.711136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.711161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.711281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.711309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.711428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.711454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.711541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.158 [2024-07-23 18:22:51.711567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.158 qpair failed and we were unable to recover it.
00:34:44.158 [2024-07-23 18:22:51.711670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.711696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.711817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.711843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.711967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.711992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.712080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.712105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.712197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.712227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.712326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.712352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.712470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.712496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.712583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.712607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.712694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.712721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.712823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.712852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.712946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.159 [2024-07-23 18:22:51.712972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.159 qpair failed and we were unable to recover it.
00:34:44.159 [2024-07-23 18:22:51.713062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.713087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.713205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.713231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.713328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.713356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.713451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.713477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.713600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.713626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 
00:34:44.159 [2024-07-23 18:22:51.713715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.713740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.713836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.713862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.713984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.714010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.714104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.714130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.714246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.714272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 
00:34:44.159 [2024-07-23 18:22:51.714374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.714403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.714502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.714528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.714618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.714644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.714733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.714758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.714873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.714901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 
00:34:44.159 [2024-07-23 18:22:51.714996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.715021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.715121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.715148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.715245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.715271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.715367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.715394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.715517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.715543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 
00:34:44.159 [2024-07-23 18:22:51.715638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.715664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-23 18:22:51.715753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.159 [2024-07-23 18:22:51.715779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.715902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.715931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.716055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.716082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.716174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.716203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.160 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:44.160 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.160 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.160 [2024-07-23 18:22:51.717057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.717090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.717190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.717218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.717327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.717356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 [2024-07-23 18:22:51.717468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.717495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.717599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.717625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.717725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.717751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.717869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.717895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.717980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.718006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 [2024-07-23 18:22:51.718092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.718118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.718214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.718240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.718336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.718363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.718457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.718483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.718567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.718592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 [2024-07-23 18:22:51.718710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.718735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.718854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.718881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.718975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.719005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.719141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.719179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.719270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.719296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 [2024-07-23 18:22:51.719400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.719429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.719517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.719543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.719659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.719684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.719780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.719805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.719896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.719924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 [2024-07-23 18:22:51.720014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.720040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.720127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.720153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.720244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.720270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.720371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.720399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.720496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.720521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 [2024-07-23 18:22:51.720622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.720648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.720745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.720771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.720858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.720883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.721006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.721032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-23 18:22:51.721129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.160 [2024-07-23 18:22:51.721154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 [2024-07-23 18:22:51.721278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.721305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.721420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.721447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.721549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.721588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.721712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.721737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.721859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.721885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 
00:34:44.161 [2024-07-23 18:22:51.721973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.721997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.722094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.722124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.722220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.722245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.722343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.722370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.722463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.722490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 
00:34:44.161 [2024-07-23 18:22:51.722585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.722613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.722729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.722755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.722837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.722863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.722957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.722982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.723071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.723097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 
00:34:44.161 [2024-07-23 18:22:51.723184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.723210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.723297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.723330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.723434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.723459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.723553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.723577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.723697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.723722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 
00:34:44.161 [2024-07-23 18:22:51.723812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.723838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.723929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.723954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.724046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.724072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.724187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.724226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.724342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.724371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 
00:34:44.161 [2024-07-23 18:22:51.724470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.724496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.724598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.724624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.161 [2024-07-23 18:22:51.724718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.724744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:44.161 [2024-07-23 18:22:51.724844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.724871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 
00:34:44.161 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.161 [2024-07-23 18:22:51.724969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.724997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.161 [2024-07-23 18:22:51.725120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.725145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.725230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.725254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.725350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.725375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 00:34:44.161 [2024-07-23 18:22:51.725484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.161 [2024-07-23 18:22:51.725513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420 00:34:44.161 qpair failed and we were unable to recover it. 
00:34:44.161 [2024-07-23 18:22:51.725615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.161 [2024-07-23 18:22:51.725642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.161 qpair failed and we were unable to recover it.
00:34:44.161 [2024-07-23 18:22:51.725734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.161 [2024-07-23 18:22:51.725760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.161 qpair failed and we were unable to recover it.
00:34:44.161 [2024-07-23 18:22:51.725881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.161 [2024-07-23 18:22:51.725908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.725997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.726023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.726113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.726141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.726242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.726268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.726372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.726401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.726496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.726522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.726614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.726639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.726733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.726758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.726843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.726869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.726979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.727097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.727239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.727376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.727499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.727610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.727722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.727840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.727957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.727983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.728075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.728102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.728196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.728223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.728314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.728347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.728437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.728463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.728555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.728580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.728667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.728693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.728786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.728813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b3f40 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.728946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.728983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.729116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.729152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.729284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.729311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.729405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.729432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.729527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.729553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.729655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.729681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.729772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.729808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6330000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.729908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.729937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.730031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.730057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.730148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.730173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.730299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.730335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.730429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.730454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.730539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.730565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.162 [2024-07-23 18:22:51.730683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.162 [2024-07-23 18:22:51.730713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.162 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.730810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.730837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.730926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.730952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.731046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.731071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.731176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.731202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.731300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.731340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.731468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.731494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.731578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.731604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.731723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.731749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.731838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.731864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.731945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.731970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.732097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.732123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.732210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.732235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.732354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.732382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.732477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.732503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.732607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:44.163 [2024-07-23 18:22:51.732633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 [2024-07-23 18:22:51.732721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.732747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:44.163 [2024-07-23 18:22:51.732837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.163 [2024-07-23 18:22:51.732864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.163 qpair failed and we were unable to recover it.
00:34:44.163 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:44.422 [2024-07-23 18:22:51.732958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.732984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:44.422 [2024-07-23 18:22:51.733073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.733098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.733187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.733213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.733313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.733344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.733443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.733469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.733599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.733624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.733707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.733734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.733834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.733874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.734005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.734033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.734136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.734161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.734253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.734279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.734383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.734410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.734499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.734525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.734609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.734635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.734729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.422 [2024-07-23 18:22:51.734755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.422 qpair failed and we were unable to recover it.
00:34:44.422 [2024-07-23 18:22:51.734851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.734881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.734979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.735100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.735213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.735343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.735463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.735589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.735701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.735814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.735928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.735953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6328000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.736044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.736072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.736155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.736181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.736269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.736296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.736394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.423 [2024-07-23 18:22:51.736421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6320000b90 with addr=10.0.0.2, port=4420
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.736798] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:44.423 [2024-07-23 18:22:51.739096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.423 [2024-07-23 18:22:51.739217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.423 [2024-07-23 18:22:51.739245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.423 [2024-07-23 18:22:51.739262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.423 [2024-07-23 18:22:51.739275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90
00:34:44.423 [2024-07-23 18:22:51.739310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:44.423 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:44.423 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:44.423 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:44.423 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:44.423 18:22:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2504797
00:34:44.423 [2024-07-23 18:22:51.748929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.423 [2024-07-23 18:22:51.749026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.423 [2024-07-23 18:22:51.749053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.423 [2024-07-23 18:22:51.749068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.423 [2024-07-23 18:22:51.749081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90
00:34:44.423 [2024-07-23 18:22:51.749111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:44.423 qpair failed and we were unable to recover it.
00:34:44.423 [2024-07-23 18:22:51.758970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.423 [2024-07-23 18:22:51.759062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.423 [2024-07-23 18:22:51.759089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.423 [2024-07-23 18:22:51.759104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.423 [2024-07-23 18:22:51.759117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:44.423 [2024-07-23 18:22:51.759146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.423 qpair failed and we were unable to recover it. 
00:34:44.423 [2024-07-23 18:22:51.768995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.423 [2024-07-23 18:22:51.769138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.423 [2024-07-23 18:22:51.769165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.423 [2024-07-23 18:22:51.769179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.423 [2024-07-23 18:22:51.769192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:44.423 [2024-07-23 18:22:51.769221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.423 qpair failed and we were unable to recover it. 
00:34:44.423 [2024-07-23 18:22:51.779075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.423 [2024-07-23 18:22:51.779175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.423 [2024-07-23 18:22:51.779202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.423 [2024-07-23 18:22:51.779217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.423 [2024-07-23 18:22:51.779230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:44.423 [2024-07-23 18:22:51.779259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.423 qpair failed and we were unable to recover it. 
00:34:44.423 [2024-07-23 18:22:51.788957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.423 [2024-07-23 18:22:51.789045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.423 [2024-07-23 18:22:51.789071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.423 [2024-07-23 18:22:51.789086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.423 [2024-07-23 18:22:51.789098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:44.423 [2024-07-23 18:22:51.789128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.423 qpair failed and we were unable to recover it. 
00:34:44.423 [2024-07-23 18:22:51.799026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.423 [2024-07-23 18:22:51.799120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.423 [2024-07-23 18:22:51.799146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.423 [2024-07-23 18:22:51.799160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.423 [2024-07-23 18:22:51.799173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:44.423 [2024-07-23 18:22:51.799202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.423 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.808994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.809088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.809114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.809129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.809143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:44.424 [2024-07-23 18:22:51.809173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.819007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.819097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.819124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.819139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.819152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:44.424 [2024-07-23 18:22:51.819182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.829050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.829188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.829218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.829240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.829254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.829285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.839059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.839156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.839183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.839198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.839211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.839240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.849110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.849213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.849240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.849254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.849267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.849295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.859136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.859234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.859260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.859274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.859287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.859323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.869267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.869375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.869402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.869416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.869430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.869457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.879331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.879442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.879468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.879482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.879495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.879524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.889212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.889309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.889342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.889357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.889371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.889398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.899256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.899358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.899384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.899398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.899412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.899441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.909302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.909400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.909426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.909441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.909453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.909482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.919346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.919435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.919471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.919488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.919502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.919532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.929331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.424 [2024-07-23 18:22:51.929437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.424 [2024-07-23 18:22:51.929463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.424 [2024-07-23 18:22:51.929478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.424 [2024-07-23 18:22:51.929491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.424 [2024-07-23 18:22:51.929518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.424 qpair failed and we were unable to recover it. 
00:34:44.424 [2024-07-23 18:22:51.939419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:51.939514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:51.939541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:51.939555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:51.939568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:51.939597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:51.949451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:51.949549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:51.949575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:51.949589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:51.949602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:51.949630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:51.959466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:51.959556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:51.959585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:51.959611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:51.959624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:51.959653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:51.969432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:51.969528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:51.969554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:51.969568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:51.969581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:51.969609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:51.979579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:51.979674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:51.979700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:51.979714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:51.979727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:51.979754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:51.989511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:51.989600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:51.989625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:51.989639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:51.989652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:51.989679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:51.999715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:51.999801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:51.999827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:51.999841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:51.999854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:51.999882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:52.009601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:52.009704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:52.009734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:52.009749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:52.009761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:52.009790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:52.019646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:52.019743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:52.019768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:52.019782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:52.019795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:52.019822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:52.029628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:52.029716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:52.029741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:52.029755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:52.029768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:52.029796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:52.039749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:52.039838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:52.039863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:52.039877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:52.039890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:52.039918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:52.049681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.425 [2024-07-23 18:22:52.049796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.425 [2024-07-23 18:22:52.049823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.425 [2024-07-23 18:22:52.049838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.425 [2024-07-23 18:22:52.049852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.425 [2024-07-23 18:22:52.049886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.425 qpair failed and we were unable to recover it. 
00:34:44.425 [2024-07-23 18:22:52.059692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.425 [2024-07-23 18:22:52.059810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.425 [2024-07-23 18:22:52.059836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.425 [2024-07-23 18:22:52.059850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.425 [2024-07-23 18:22:52.059862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.425 [2024-07-23 18:22:52.059891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.425 qpair failed and we were unable to recover it.
00:34:44.426 [2024-07-23 18:22:52.069708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.426 [2024-07-23 18:22:52.069827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.426 [2024-07-23 18:22:52.069854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.426 [2024-07-23 18:22:52.069868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.426 [2024-07-23 18:22:52.069880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.426 [2024-07-23 18:22:52.069909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.426 qpair failed and we were unable to recover it.
00:34:44.426 [2024-07-23 18:22:52.079788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.426 [2024-07-23 18:22:52.079876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.426 [2024-07-23 18:22:52.079907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.426 [2024-07-23 18:22:52.079924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.426 [2024-07-23 18:22:52.079945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.426 [2024-07-23 18:22:52.079978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.426 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.089809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.089909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.089934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.089949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.089961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.089991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.099815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.099909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.099940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.099955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.099968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.099996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.109837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.109957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.109982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.109997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.110009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.110038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.119828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.119918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.119943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.119957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.119970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.119997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.129899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.129993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.130019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.130033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.130045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.130073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.139913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.140007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.140033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.140047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.140060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.140092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.149981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.150121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.150147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.150162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.150174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.150203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.159960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.160050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.160075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.160089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.160102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.160130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.170025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.170119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.170145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.170159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.170172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.170200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.180051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.180146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.180171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.180185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.180197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.180226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.190054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.684 [2024-07-23 18:22:52.190148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.684 [2024-07-23 18:22:52.190178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.684 [2024-07-23 18:22:52.190194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.684 [2024-07-23 18:22:52.190206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.684 [2024-07-23 18:22:52.190234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.684 qpair failed and we were unable to recover it.
00:34:44.684 [2024-07-23 18:22:52.200082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.200217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.200242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.200256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.200268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.200296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.210157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.210254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.210280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.210294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.210307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.210342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.220184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.220278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.220304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.220325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.220341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.220369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.230162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.230258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.230287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.230302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.230315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.230360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.240210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.240311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.240343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.240357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.240370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.240400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.250205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.250297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.250330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.250346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.250359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.250387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.260280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.260405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.260430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.260445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.260457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.260485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.270365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.270455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.270480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.270494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.270506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.270535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.280321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.280412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.280442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.280457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.280469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.280497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.290339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.290437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.290462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.290476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.290488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.290516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.300362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.300452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.300477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.300491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.300504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.300533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.310398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.310483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.310508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.310522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.310535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.310563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.320432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.320521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.320545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.320560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.685 [2024-07-23 18:22:52.320578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.685 [2024-07-23 18:22:52.320606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.685 qpair failed and we were unable to recover it.
00:34:44.685 [2024-07-23 18:22:52.330517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.685 [2024-07-23 18:22:52.330659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.685 [2024-07-23 18:22:52.330683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.685 [2024-07-23 18:22:52.330700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.686 [2024-07-23 18:22:52.330713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.686 [2024-07-23 18:22:52.330741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.686 qpair failed and we were unable to recover it.
00:34:44.686 [2024-07-23 18:22:52.340494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.686 [2024-07-23 18:22:52.340582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.686 [2024-07-23 18:22:52.340607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.686 [2024-07-23 18:22:52.340621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.686 [2024-07-23 18:22:52.340633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.686 [2024-07-23 18:22:52.340661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.686 qpair failed and we were unable to recover it.
00:34:44.942 [2024-07-23 18:22:52.350519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.942 [2024-07-23 18:22:52.350638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.942 [2024-07-23 18:22:52.350665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.942 [2024-07-23 18:22:52.350680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.942 [2024-07-23 18:22:52.350693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.942 [2024-07-23 18:22:52.350721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.942 qpair failed and we were unable to recover it.
00:34:44.942 [2024-07-23 18:22:52.360572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.942 [2024-07-23 18:22:52.360684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.942 [2024-07-23 18:22:52.360710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.942 [2024-07-23 18:22:52.360724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.942 [2024-07-23 18:22:52.360737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.942 [2024-07-23 18:22:52.360766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.942 qpair failed and we were unable to recover it.
00:34:44.942 [2024-07-23 18:22:52.370615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.942 [2024-07-23 18:22:52.370715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.942 [2024-07-23 18:22:52.370740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.942 [2024-07-23 18:22:52.370754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.942 [2024-07-23 18:22:52.370766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.942 [2024-07-23 18:22:52.370795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.942 qpair failed and we were unable to recover it.
00:34:44.942 [2024-07-23 18:22:52.380608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.942 [2024-07-23 18:22:52.380736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.942 [2024-07-23 18:22:52.380761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.942 [2024-07-23 18:22:52.380775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.942 [2024-07-23 18:22:52.380787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.942 [2024-07-23 18:22:52.380816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.942 qpair failed and we were unable to recover it.
00:34:44.942 [2024-07-23 18:22:52.390633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.942 [2024-07-23 18:22:52.390723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.942 [2024-07-23 18:22:52.390748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.942 [2024-07-23 18:22:52.390763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.942 [2024-07-23 18:22:52.390775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.942 [2024-07-23 18:22:52.390803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.942 qpair failed and we were unable to recover it.
00:34:44.942 [2024-07-23 18:22:52.400696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.942 [2024-07-23 18:22:52.400816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.942 [2024-07-23 18:22:52.400841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.942 [2024-07-23 18:22:52.400855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.942 [2024-07-23 18:22:52.400868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.942 [2024-07-23 18:22:52.400896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.942 qpair failed and we were unable to recover it.
00:34:44.942 [2024-07-23 18:22:52.410780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.942 [2024-07-23 18:22:52.410876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.942 [2024-07-23 18:22:52.410901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.942 [2024-07-23 18:22:52.410915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.942 [2024-07-23 18:22:52.410933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.942 [2024-07-23 18:22:52.410961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.942 qpair failed and we were unable to recover it.
00:34:44.942 [2024-07-23 18:22:52.420846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.420942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.420966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.420981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.420994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.421022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.430767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.430862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.430888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.430902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.430914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.430942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.440827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.440923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.440948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.440962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.440974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.441002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.450827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.450923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.450951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.450966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.450979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.451007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.460838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.460938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.460964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.460978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.460991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.461018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.470847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.470938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.470963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.470978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.470990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.471018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.480917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.481011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.481036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.481050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.481062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.481090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.490905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.491048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.491073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.491087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.491100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.491128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.500955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.501075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.501100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.501114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.501132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.501161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.510959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.511059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.511085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.511099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.511111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.511139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.520976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.521066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.521091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.521104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.521117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.521145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.531128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.531242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.531267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.531281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.531294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.531332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.541079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.541173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.541198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.541212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.541225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.541252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.551119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.551210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.551236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.551250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.551263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.551290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.561126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.561216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.561241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.561255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.561268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.561295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.571159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.942 [2024-07-23 18:22:52.571273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.942 [2024-07-23 18:22:52.571299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.942 [2024-07-23 18:22:52.571313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.942 [2024-07-23 18:22:52.571333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:44.942 [2024-07-23 18:22:52.571363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.942 qpair failed and we were unable to recover it. 
00:34:44.942 [2024-07-23 18:22:52.581225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.942 [2024-07-23 18:22:52.581325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.943 [2024-07-23 18:22:52.581351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.943 [2024-07-23 18:22:52.581365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.943 [2024-07-23 18:22:52.581378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.943 [2024-07-23 18:22:52.581405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.943 qpair failed and we were unable to recover it.
00:34:44.943 [2024-07-23 18:22:52.591221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.943 [2024-07-23 18:22:52.591309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.943 [2024-07-23 18:22:52.591344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.943 [2024-07-23 18:22:52.591365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.943 [2024-07-23 18:22:52.591379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.943 [2024-07-23 18:22:52.591407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.943 qpair failed and we were unable to recover it.
00:34:44.943 [2024-07-23 18:22:52.601237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:44.943 [2024-07-23 18:22:52.601338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:44.943 [2024-07-23 18:22:52.601367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:44.943 [2024-07-23 18:22:52.601382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:44.943 [2024-07-23 18:22:52.601396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:44.943 [2024-07-23 18:22:52.601427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:44.943 qpair failed and we were unable to recover it.
00:34:45.199 [2024-07-23 18:22:52.611257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.199 [2024-07-23 18:22:52.611351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.199 [2024-07-23 18:22:52.611388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.199 [2024-07-23 18:22:52.611403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.199 [2024-07-23 18:22:52.611416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.199 [2024-07-23 18:22:52.611447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.199 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.621289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.621394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.621420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.621434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.621447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.621478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.631303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.631420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.631445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.631459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.631472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.631500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.641365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.641452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.641478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.641493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.641507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.641535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.651376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.651512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.651537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.651552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.651565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.651594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.661388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.661487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.661513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.661527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.661540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.661568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.671453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.671549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.671575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.671592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.671605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.671634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.681421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.681522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.681548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.681568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.681581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.681609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.691486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.691620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.691646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.691660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.691673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.691700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.701523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.701618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.701644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.701658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.701671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.701699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.711546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.711628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.711654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.711668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.711680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.711708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.721526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.721612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.721638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.721652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.721665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.721692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.731601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.200 [2024-07-23 18:22:52.731699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.200 [2024-07-23 18:22:52.731725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.200 [2024-07-23 18:22:52.731739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.200 [2024-07-23 18:22:52.731752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.200 [2024-07-23 18:22:52.731779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.200 qpair failed and we were unable to recover it.
00:34:45.200 [2024-07-23 18:22:52.741634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.200 [2024-07-23 18:22:52.741728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.200 [2024-07-23 18:22:52.741753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.200 [2024-07-23 18:22:52.741767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.200 [2024-07-23 18:22:52.741780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.200 [2024-07-23 18:22:52.741808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.200 qpair failed and we were unable to recover it. 
00:34:45.200 [2024-07-23 18:22:52.751659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.200 [2024-07-23 18:22:52.751751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.200 [2024-07-23 18:22:52.751776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.751790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.751803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.751831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.761684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.761774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.761800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.761814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.761827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.761855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.771736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.771834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.771860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.771882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.771896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.771925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.781741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.781868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.781894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.781908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.781921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.781948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.791764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.791855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.791880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.791894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.791907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.791935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.801773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.801864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.801889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.801903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.801916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.801944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.811856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.811968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.811993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.812007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.812020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.812047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.821845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.821939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.821965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.821979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.821992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.822020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.831870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.831960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.831985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.832000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.832013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.832041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.841909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.842042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.842067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.842081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.842094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.842122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.201 [2024-07-23 18:22:52.851975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.201 [2024-07-23 18:22:52.852084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.201 [2024-07-23 18:22:52.852110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.201 [2024-07-23 18:22:52.852124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.201 [2024-07-23 18:22:52.852137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.201 [2024-07-23 18:22:52.852165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.201 qpair failed and we were unable to recover it. 
00:34:45.459 [2024-07-23 18:22:52.861975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.459 [2024-07-23 18:22:52.862120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.459 [2024-07-23 18:22:52.862146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.459 [2024-07-23 18:22:52.862167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.459 [2024-07-23 18:22:52.862181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.459 [2024-07-23 18:22:52.862210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.459 qpair failed and we were unable to recover it. 
00:34:45.459 [2024-07-23 18:22:52.872011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.459 [2024-07-23 18:22:52.872123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.459 [2024-07-23 18:22:52.872149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.459 [2024-07-23 18:22:52.872163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.459 [2024-07-23 18:22:52.872176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.459 [2024-07-23 18:22:52.872205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.459 qpair failed and we were unable to recover it. 
00:34:45.459 [2024-07-23 18:22:52.882033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.459 [2024-07-23 18:22:52.882135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.459 [2024-07-23 18:22:52.882161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.459 [2024-07-23 18:22:52.882175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.459 [2024-07-23 18:22:52.882188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.459 [2024-07-23 18:22:52.882217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.459 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.892084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.892194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.892220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.892234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.892246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.892275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.902091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.902206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.902233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.902247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.902260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.902289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.912124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.912218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.912243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.912257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.912270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.912298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.922137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.922259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.922285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.922299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.922312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.922349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.932188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.932287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.932312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.932338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.932352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.932381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.942246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.942357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.942383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.942397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.942409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.942439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.952213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.952313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.952350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.952365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.952378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.952406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.962245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.962345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.962371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.962385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.962399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.962428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.972301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.972465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.972491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.972506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.972519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.972547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.982296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.982413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.982439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.982453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.982467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.982496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:52.992340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:52.992432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:52.992458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:52.992472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:52.992485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:52.992519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:53.002361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:53.002448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:53.002474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:53.002488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:53.002502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:53.002531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:53.012403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.460 [2024-07-23 18:22:53.012499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.460 [2024-07-23 18:22:53.012524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.460 [2024-07-23 18:22:53.012538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.460 [2024-07-23 18:22:53.012551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.460 [2024-07-23 18:22:53.012580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.460 qpair failed and we were unable to recover it. 
00:34:45.460 [2024-07-23 18:22:53.022460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.022560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.022588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.022602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.022615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.022643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.032456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.032543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.032568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.032583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.032596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.032624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.042496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.042627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.042658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.042673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.042687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.042715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.052506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.052598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.052628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.052642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.052655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.052683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.062525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.062618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.062643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.062656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.062669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.062696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.072580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.072708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.072732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.072746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.072759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.072788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.082578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.082669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.082694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.082708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.082721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.082754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.092634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.092743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.092768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.092781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.092794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.092822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.102657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.102776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.102801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.102815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.102827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.102855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.461 [2024-07-23 18:22:53.112693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.461 [2024-07-23 18:22:53.112790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.461 [2024-07-23 18:22:53.112815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.461 [2024-07-23 18:22:53.112829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.461 [2024-07-23 18:22:53.112842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.461 [2024-07-23 18:22:53.112870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.461 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.122711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.122807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.122833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.122849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.122871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.122905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.132816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.132959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.132990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.133005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.133018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.133046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.142754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.142843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.142868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.142882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.142894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.142922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.152844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.152942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.152967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.152981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.152994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.153021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.162849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.162948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.162973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.162987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.163000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.163030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.172908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.173032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.173057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.173071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.173083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.173117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.182932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.183042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.183067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.183081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.183094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.183122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.192920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.193017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.193044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.193058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.193071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.193099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.202946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.719 [2024-07-23 18:22:53.203076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.719 [2024-07-23 18:22:53.203102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.719 [2024-07-23 18:22:53.203116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.719 [2024-07-23 18:22:53.203129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.719 [2024-07-23 18:22:53.203157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.719 qpair failed and we were unable to recover it.
00:34:45.719 [2024-07-23 18:22:53.212978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.213082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.213107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.213121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.213134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.213162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.222984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.223071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.223101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.223116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.223128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.223156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.233022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.233112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.233137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.233151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.233164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.233191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.243056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.243150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.243175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.243189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.243202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.243229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.253088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.253193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.253219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.253233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.253246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.253274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.263080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.263175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.263200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.263215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.263233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.263262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.273114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.273204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.273230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.273245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.273257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.273285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.283128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.283219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.283244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.283259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.283273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.283302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.293175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.293304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.293339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.293355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.293367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.293395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.303187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.303282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.303307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.303328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.303342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.303371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.313207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.313301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.313333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.313348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.313360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.313388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.323277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.323383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.323408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.323422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.323434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.323464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.333346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.333453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.333478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.720 [2024-07-23 18:22:53.333492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.720 [2024-07-23 18:22:53.333504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.720 [2024-07-23 18:22:53.333533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.720 qpair failed and we were unable to recover it.
00:34:45.720 [2024-07-23 18:22:53.343298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.720 [2024-07-23 18:22:53.343394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.720 [2024-07-23 18:22:53.343418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.721 [2024-07-23 18:22:53.343431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.721 [2024-07-23 18:22:53.343443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.721 [2024-07-23 18:22:53.343470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.721 qpair failed and we were unable to recover it.
00:34:45.721 [2024-07-23 18:22:53.353355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.721 [2024-07-23 18:22:53.353453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.721 [2024-07-23 18:22:53.353479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.721 [2024-07-23 18:22:53.353493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.721 [2024-07-23 18:22:53.353511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.721 [2024-07-23 18:22:53.353541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.721 qpair failed and we were unable to recover it.
00:34:45.721 [2024-07-23 18:22:53.363395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.721 [2024-07-23 18:22:53.363520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.721 [2024-07-23 18:22:53.363545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.721 [2024-07-23 18:22:53.363559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.721 [2024-07-23 18:22:53.363574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.721 [2024-07-23 18:22:53.363601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.721 qpair failed and we were unable to recover it.
00:34:45.721 [2024-07-23 18:22:53.373454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:45.721 [2024-07-23 18:22:53.373570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:45.721 [2024-07-23 18:22:53.373595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:45.721 [2024-07-23 18:22:53.373609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:45.721 [2024-07-23 18:22:53.373623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:45.721 [2024-07-23 18:22:53.373653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:45.721 qpair failed and we were unable to recover it.
00:34:45.979 [2024-07-23 18:22:53.383412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.383503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.383529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.383543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.383556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.383584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.979 [2024-07-23 18:22:53.393440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.393525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.393551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.393567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.393580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.393608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.979 [2024-07-23 18:22:53.403496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.403593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.403619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.403634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.403646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.403676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.979 [2024-07-23 18:22:53.413507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.413599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.413624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.413638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.413651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.413679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.979 [2024-07-23 18:22:53.423524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.423627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.423653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.423667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.423680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.423709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.979 [2024-07-23 18:22:53.433586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.433715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.433740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.433754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.433767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.433795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.979 [2024-07-23 18:22:53.443609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.443696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.443720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.443735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.443753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.443781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.979 [2024-07-23 18:22:53.453646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.453756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.453781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.453795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.453808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.453838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.979 [2024-07-23 18:22:53.463651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.979 [2024-07-23 18:22:53.463736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.979 [2024-07-23 18:22:53.463761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.979 [2024-07-23 18:22:53.463775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.979 [2024-07-23 18:22:53.463787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.979 [2024-07-23 18:22:53.463815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.979 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.473722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.473816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.473841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.473856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.473868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.473897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.483739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.483831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.483856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.483871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.483883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.483911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.493753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.493854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.493878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.493892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.493905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.493933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.503841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.503954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.503980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.503994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.504007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.504035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.513793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.513886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.513912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.513934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.513948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.513976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.523813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.523903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.523928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.523943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.523956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.523983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.533895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.533990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.534015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.534035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.534049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.534077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.543899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.543990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.544016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.544030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.544043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.544071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.553906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.553997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.554021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.554035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.554048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.554076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.563960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.564056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.564083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.564098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.564111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.564139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.573976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.574069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.574095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.574109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.574122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.574150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.583985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.584083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.584108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.584122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.584136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.584163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.594024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.594113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.980 [2024-07-23 18:22:53.594138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.980 [2024-07-23 18:22:53.594151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.980 [2024-07-23 18:22:53.594164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.980 [2024-07-23 18:22:53.594192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.980 qpair failed and we were unable to recover it. 
00:34:45.980 [2024-07-23 18:22:53.604035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.980 [2024-07-23 18:22:53.604121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.981 [2024-07-23 18:22:53.604148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.981 [2024-07-23 18:22:53.604162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.981 [2024-07-23 18:22:53.604175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.981 [2024-07-23 18:22:53.604204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.981 qpair failed and we were unable to recover it. 
00:34:45.981 [2024-07-23 18:22:53.614110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.981 [2024-07-23 18:22:53.614207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.981 [2024-07-23 18:22:53.614232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.981 [2024-07-23 18:22:53.614246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.981 [2024-07-23 18:22:53.614261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.981 [2024-07-23 18:22:53.614289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.981 qpair failed and we were unable to recover it. 
00:34:45.981 [2024-07-23 18:22:53.624121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.981 [2024-07-23 18:22:53.624210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.981 [2024-07-23 18:22:53.624236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.981 [2024-07-23 18:22:53.624256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.981 [2024-07-23 18:22:53.624269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.981 [2024-07-23 18:22:53.624299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.981 qpair failed and we were unable to recover it. 
00:34:45.981 [2024-07-23 18:22:53.634107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.981 [2024-07-23 18:22:53.634251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.981 [2024-07-23 18:22:53.634276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.981 [2024-07-23 18:22:53.634290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.981 [2024-07-23 18:22:53.634303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:45.981 [2024-07-23 18:22:53.634349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.981 qpair failed and we were unable to recover it. 
00:34:46.239 [2024-07-23 18:22:53.644139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.239 [2024-07-23 18:22:53.644236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.239 [2024-07-23 18:22:53.644262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.239 [2024-07-23 18:22:53.644277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.239 [2024-07-23 18:22:53.644290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.239 [2024-07-23 18:22:53.644327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.239 qpair failed and we were unable to recover it. 
00:34:46.239 [2024-07-23 18:22:53.654214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.239 [2024-07-23 18:22:53.654306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.239 [2024-07-23 18:22:53.654338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.239 [2024-07-23 18:22:53.654352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.239 [2024-07-23 18:22:53.654364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.239 [2024-07-23 18:22:53.654394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.239 qpair failed and we were unable to recover it. 
00:34:46.239 [2024-07-23 18:22:53.664230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.239 [2024-07-23 18:22:53.664352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.239 [2024-07-23 18:22:53.664378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.239 [2024-07-23 18:22:53.664392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.239 [2024-07-23 18:22:53.664404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.239 [2024-07-23 18:22:53.664434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.239 qpair failed and we were unable to recover it. 
00:34:46.239 [2024-07-23 18:22:53.674222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.239 [2024-07-23 18:22:53.674333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.239 [2024-07-23 18:22:53.674359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.239 [2024-07-23 18:22:53.674373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.239 [2024-07-23 18:22:53.674386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.239 [2024-07-23 18:22:53.674414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.239 qpair failed and we were unable to recover it. 
00:34:46.239 [2024-07-23 18:22:53.684269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.239 [2024-07-23 18:22:53.684370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.239 [2024-07-23 18:22:53.684396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.239 [2024-07-23 18:22:53.684410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.239 [2024-07-23 18:22:53.684423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.239 [2024-07-23 18:22:53.684451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.239 qpair failed and we were unable to recover it. 
00:34:46.239 [2024-07-23 18:22:53.694374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.239 [2024-07-23 18:22:53.694518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.694543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.694558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.694570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.694598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.704330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.704424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.704449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.704463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.704476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.704505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.714350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.714435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.714460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.714480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.714494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.714522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.724409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.724504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.724529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.724543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.724556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.724584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.734426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.734532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.734557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.734571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.734583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.734612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.744487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.744603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.744628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.744643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.744655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.744683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.754502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.754621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.754646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.754660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.754673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.754703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.764540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.764651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.764676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.764690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.764703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.764731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.774605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.774708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.774732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.774746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.774759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.774786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.784602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.784699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.784724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.784738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.784751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.784779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.794612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.794721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.794747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.794760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.794773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.794800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.804602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.804702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.804731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.804752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.804766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.804796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.814681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.814822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.814849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.814863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.814876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.240 [2024-07-23 18:22:53.814905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.240 qpair failed and we were unable to recover it. 
00:34:46.240 [2024-07-23 18:22:53.824651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.240 [2024-07-23 18:22:53.824750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.240 [2024-07-23 18:22:53.824775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.240 [2024-07-23 18:22:53.824789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.240 [2024-07-23 18:22:53.824803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.241 [2024-07-23 18:22:53.824831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.241 qpair failed and we were unable to recover it. 
00:34:46.241 [2024-07-23 18:22:53.834687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.241 [2024-07-23 18:22:53.834820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.241 [2024-07-23 18:22:53.834845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.241 [2024-07-23 18:22:53.834859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.241 [2024-07-23 18:22:53.834872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.241 [2024-07-23 18:22:53.834900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.241 qpair failed and we were unable to recover it. 
00:34:46.241 [2024-07-23 18:22:53.844714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.241 [2024-07-23 18:22:53.844806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.241 [2024-07-23 18:22:53.844832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.241 [2024-07-23 18:22:53.844845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.241 [2024-07-23 18:22:53.844858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.241 [2024-07-23 18:22:53.844887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.241 qpair failed and we were unable to recover it. 
00:34:46.241 [2024-07-23 18:22:53.854744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.241 [2024-07-23 18:22:53.854859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.241 [2024-07-23 18:22:53.854884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.241 [2024-07-23 18:22:53.854898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.241 [2024-07-23 18:22:53.854911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.241 [2024-07-23 18:22:53.854940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.241 qpair failed and we were unable to recover it. 
00:34:46.241 [2024-07-23 18:22:53.864786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.241 [2024-07-23 18:22:53.864877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.241 [2024-07-23 18:22:53.864903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.241 [2024-07-23 18:22:53.864917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.241 [2024-07-23 18:22:53.864929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.241 [2024-07-23 18:22:53.864957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.241 qpair failed and we were unable to recover it. 
00:34:46.241 [2024-07-23 18:22:53.874802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.241 [2024-07-23 18:22:53.874898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.241 [2024-07-23 18:22:53.874924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.241 [2024-07-23 18:22:53.874938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.241 [2024-07-23 18:22:53.874951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.241 [2024-07-23 18:22:53.874979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.241 qpair failed and we were unable to recover it. 
00:34:46.241 [2024-07-23 18:22:53.884822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.241 [2024-07-23 18:22:53.884909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.241 [2024-07-23 18:22:53.884935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.241 [2024-07-23 18:22:53.884949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.241 [2024-07-23 18:22:53.884962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.241 [2024-07-23 18:22:53.884990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.241 qpair failed and we were unable to recover it. 
00:34:46.241 [2024-07-23 18:22:53.894874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.241 [2024-07-23 18:22:53.894994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.241 [2024-07-23 18:22:53.895024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.241 [2024-07-23 18:22:53.895039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.241 [2024-07-23 18:22:53.895054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.241 [2024-07-23 18:22:53.895084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.241 qpair failed and we were unable to recover it. 
00:34:46.499 [2024-07-23 18:22:53.904955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-23 18:22:53.905072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-23 18:22:53.905099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-23 18:22:53.905113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-23 18:22:53.905126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.499 [2024-07-23 18:22:53.905155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.499 qpair failed and we were unable to recover it. 
00:34:46.499 [2024-07-23 18:22:53.914907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.499 [2024-07-23 18:22:53.914997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.499 [2024-07-23 18:22:53.915024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.499 [2024-07-23 18:22:53.915038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.499 [2024-07-23 18:22:53.915051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.499 [2024-07-23 18:22:53.915079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.499 qpair failed and we were unable to recover it. 
00:34:46.499 [2024-07-23 18:22:53.924971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.500 [2024-07-23 18:22:53.925107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.500 [2024-07-23 18:22:53.925133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.500 [2024-07-23 18:22:53.925147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.500 [2024-07-23 18:22:53.925162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.500 [2024-07-23 18:22:53.925191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.500 qpair failed and we were unable to recover it. 
00:34:46.500 [2024-07-23 18:22:53.935000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.500 [2024-07-23 18:22:53.935099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.500 [2024-07-23 18:22:53.935124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.500 [2024-07-23 18:22:53.935138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.500 [2024-07-23 18:22:53.935151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.500 [2024-07-23 18:22:53.935179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.500 qpair failed and we were unable to recover it. 
00:34:46.500 [2024-07-23 18:22:53.945042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.500 [2024-07-23 18:22:53.945140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.500 [2024-07-23 18:22:53.945165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.500 [2024-07-23 18:22:53.945179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.500 [2024-07-23 18:22:53.945192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.500 [2024-07-23 18:22:53.945220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.500 qpair failed and we were unable to recover it. 
00:34:46.500 [2024-07-23 18:22:53.955032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.500 [2024-07-23 18:22:53.955164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.500 [2024-07-23 18:22:53.955189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.500 [2024-07-23 18:22:53.955203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.500 [2024-07-23 18:22:53.955216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.500 [2024-07-23 18:22:53.955243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.500 qpair failed and we were unable to recover it. 
00:34:46.500 [2024-07-23 18:22:53.965049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.500 [2024-07-23 18:22:53.965137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.500 [2024-07-23 18:22:53.965162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.500 [2024-07-23 18:22:53.965175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.500 [2024-07-23 18:22:53.965188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.500 [2024-07-23 18:22:53.965216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.500 qpair failed and we were unable to recover it. 
00:34:46.500 [2024-07-23 18:22:53.975148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.500 [2024-07-23 18:22:53.975242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.500 [2024-07-23 18:22:53.975267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.500 [2024-07-23 18:22:53.975281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.500 [2024-07-23 18:22:53.975293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.500 [2024-07-23 18:22:53.975327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.500 qpair failed and we were unable to recover it. 
00:34:46.500 [2024-07-23 18:22:53.985118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.500 [2024-07-23 18:22:53.985211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.500 [2024-07-23 18:22:53.985241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.500 [2024-07-23 18:22:53.985256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.500 [2024-07-23 18:22:53.985269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.500 [2024-07-23 18:22:53.985297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.500 qpair failed and we were unable to recover it.
00:34:46.500 [2024-07-23 18:22:53.995152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.500 [2024-07-23 18:22:53.995276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.500 [2024-07-23 18:22:53.995302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.500 [2024-07-23 18:22:53.995323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.500 [2024-07-23 18:22:53.995339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.500 [2024-07-23 18:22:53.995367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.500 qpair failed and we were unable to recover it.
00:34:46.500 [2024-07-23 18:22:54.005205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.500 [2024-07-23 18:22:54.005323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.500 [2024-07-23 18:22:54.005349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.500 [2024-07-23 18:22:54.005363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.500 [2024-07-23 18:22:54.005375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.500 [2024-07-23 18:22:54.005404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.500 qpair failed and we were unable to recover it.
00:34:46.500 [2024-07-23 18:22:54.015217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.500 [2024-07-23 18:22:54.015311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.500 [2024-07-23 18:22:54.015346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.500 [2024-07-23 18:22:54.015361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.500 [2024-07-23 18:22:54.015374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.500 [2024-07-23 18:22:54.015402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.500 qpair failed and we were unable to recover it.
00:34:46.500 [2024-07-23 18:22:54.025223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.500 [2024-07-23 18:22:54.025323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.500 [2024-07-23 18:22:54.025350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.500 [2024-07-23 18:22:54.025364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.500 [2024-07-23 18:22:54.025377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.500 [2024-07-23 18:22:54.025412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.500 qpair failed and we were unable to recover it.
00:34:46.500 [2024-07-23 18:22:54.035297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.500 [2024-07-23 18:22:54.035391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.500 [2024-07-23 18:22:54.035417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.500 [2024-07-23 18:22:54.035431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.500 [2024-07-23 18:22:54.035444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.500 [2024-07-23 18:22:54.035473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.500 qpair failed and we were unable to recover it.
00:34:46.500 [2024-07-23 18:22:54.045324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.500 [2024-07-23 18:22:54.045419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.500 [2024-07-23 18:22:54.045449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.500 [2024-07-23 18:22:54.045464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.500 [2024-07-23 18:22:54.045477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.500 [2024-07-23 18:22:54.045508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.500 qpair failed and we were unable to recover it.
00:34:46.500 [2024-07-23 18:22:54.055362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.500 [2024-07-23 18:22:54.055459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.500 [2024-07-23 18:22:54.055485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.500 [2024-07-23 18:22:54.055499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.055512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.055540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.065367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.065482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.065507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.065521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.065534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.065562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.075428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.075515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.075545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.075561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.075574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.075601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.085445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.085536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.085561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.085575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.085588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.085616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.095432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.095525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.095551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.095565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.095578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.095606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.105472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.105563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.105588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.105602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.105615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.105643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.115503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.115595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.115620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.115635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.115648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.115680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.125509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.125605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.125631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.125645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.125658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.125685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.135564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.135662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.135690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.135705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.135718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.135746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.145598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.145724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.145750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.145763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.145775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.145803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.501 [2024-07-23 18:22:54.155623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.501 [2024-07-23 18:22:54.155742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.501 [2024-07-23 18:22:54.155768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.501 [2024-07-23 18:22:54.155783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.501 [2024-07-23 18:22:54.155796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.501 [2024-07-23 18:22:54.155824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.501 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.165621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.165723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.165755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.165770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.165785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.165813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.175680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.175815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.175841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.175855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.175868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.175897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.185712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.185802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.185827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.185841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.185854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.185881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.195768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.195865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.195893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.195907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.195921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.195949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.205806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.205909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.205938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.205953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.205972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.206002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.215813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.215913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.215940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.215954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.215969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.215999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.225799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.225888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.225913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.225928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.225941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.225971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.235849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.235936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.235962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.235976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.235989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.236018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.245863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.760 [2024-07-23 18:22:54.245964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.760 [2024-07-23 18:22:54.245989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.760 [2024-07-23 18:22:54.246003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.760 [2024-07-23 18:22:54.246016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.760 [2024-07-23 18:22:54.246044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.760 qpair failed and we were unable to recover it.
00:34:46.760 [2024-07-23 18:22:54.255926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.256050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.256081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.256098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.256111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.256139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.265950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.266066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.266091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.266105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.266118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.266146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.275938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.276028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.276053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.276067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.276080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.276108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.285976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.286063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.286088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.286102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.286115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.286143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.296034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.296126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.296154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.296169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.296187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.296216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.306029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.306118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.306143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.306157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.306170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.306200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.316065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.316156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.316182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.316196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.316209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.316237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.326099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.326187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.326213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.326228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.326240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.326269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.336111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.761 [2024-07-23 18:22:54.336204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.761 [2024-07-23 18:22:54.336229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.761 [2024-07-23 18:22:54.336243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.761 [2024-07-23 18:22:54.336256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:46.761 [2024-07-23 18:22:54.336284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.761 qpair failed and we were unable to recover it.
00:34:46.761 [2024-07-23 18:22:54.346171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.761 [2024-07-23 18:22:54.346328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.761 [2024-07-23 18:22:54.346352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.761 [2024-07-23 18:22:54.346366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.761 [2024-07-23 18:22:54.346378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.761 [2024-07-23 18:22:54.346405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.761 qpair failed and we were unable to recover it. 
00:34:46.761 [2024-07-23 18:22:54.356200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.761 [2024-07-23 18:22:54.356325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.761 [2024-07-23 18:22:54.356351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.761 [2024-07-23 18:22:54.356365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.761 [2024-07-23 18:22:54.356378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.761 [2024-07-23 18:22:54.356407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.761 qpair failed and we were unable to recover it. 
00:34:46.761 [2024-07-23 18:22:54.366181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.761 [2024-07-23 18:22:54.366271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.761 [2024-07-23 18:22:54.366297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.761 [2024-07-23 18:22:54.366311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.761 [2024-07-23 18:22:54.366335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.761 [2024-07-23 18:22:54.366364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.761 qpair failed and we were unable to recover it. 
00:34:46.761 [2024-07-23 18:22:54.376246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.761 [2024-07-23 18:22:54.376354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.761 [2024-07-23 18:22:54.376379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.761 [2024-07-23 18:22:54.376393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.761 [2024-07-23 18:22:54.376405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.761 [2024-07-23 18:22:54.376443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.761 qpair failed and we were unable to recover it. 
00:34:46.761 [2024-07-23 18:22:54.386267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.761 [2024-07-23 18:22:54.386390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.761 [2024-07-23 18:22:54.386415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.762 [2024-07-23 18:22:54.386429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.762 [2024-07-23 18:22:54.386447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.762 [2024-07-23 18:22:54.386476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.762 qpair failed and we were unable to recover it. 
00:34:46.762 [2024-07-23 18:22:54.396303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.762 [2024-07-23 18:22:54.396419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.762 [2024-07-23 18:22:54.396444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.762 [2024-07-23 18:22:54.396458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.762 [2024-07-23 18:22:54.396471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.762 [2024-07-23 18:22:54.396499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.762 qpair failed and we were unable to recover it. 
00:34:46.762 [2024-07-23 18:22:54.406298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.762 [2024-07-23 18:22:54.406398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.762 [2024-07-23 18:22:54.406423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.762 [2024-07-23 18:22:54.406437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.762 [2024-07-23 18:22:54.406450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.762 [2024-07-23 18:22:54.406477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.762 qpair failed and we were unable to recover it. 
00:34:46.762 [2024-07-23 18:22:54.416346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.762 [2024-07-23 18:22:54.416444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.762 [2024-07-23 18:22:54.416469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.762 [2024-07-23 18:22:54.416484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.762 [2024-07-23 18:22:54.416497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:46.762 [2024-07-23 18:22:54.416526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.762 qpair failed and we were unable to recover it. 
00:34:47.020 [2024-07-23 18:22:54.426366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.020 [2024-07-23 18:22:54.426487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.020 [2024-07-23 18:22:54.426513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.020 [2024-07-23 18:22:54.426528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.020 [2024-07-23 18:22:54.426540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.020 [2024-07-23 18:22:54.426569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.020 qpair failed and we were unable to recover it.
00:34:47.020 [2024-07-23 18:22:54.436406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.020 [2024-07-23 18:22:54.436504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.020 [2024-07-23 18:22:54.436530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.020 [2024-07-23 18:22:54.436545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.020 [2024-07-23 18:22:54.436558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.020 [2024-07-23 18:22:54.436587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.020 qpair failed and we were unable to recover it.
00:34:47.020 [2024-07-23 18:22:54.446430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.020 [2024-07-23 18:22:54.446521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.020 [2024-07-23 18:22:54.446551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.020 [2024-07-23 18:22:54.446567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.020 [2024-07-23 18:22:54.446580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.020 [2024-07-23 18:22:54.446609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.020 qpair failed and we were unable to recover it.
00:34:47.020 [2024-07-23 18:22:54.456522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.020 [2024-07-23 18:22:54.456611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.020 [2024-07-23 18:22:54.456637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.020 [2024-07-23 18:22:54.456651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.020 [2024-07-23 18:22:54.456666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.020 [2024-07-23 18:22:54.456695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.020 qpair failed and we were unable to recover it.
00:34:47.020 [2024-07-23 18:22:54.466520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.466612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.466638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.466652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.466665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.466693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.476558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.476685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.476710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.476725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.476743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.476772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.486560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.486656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.486681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.486696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.486709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.486737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.496587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.496683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.496708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.496722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.496735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.496763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.506624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.506721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.506747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.506761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.506774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.506802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.516627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.516724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.516749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.516763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.516776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.516804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.526720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.526808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.526833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.526848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.526861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.526889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.536676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.536810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.536838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.536853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.536868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.536897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.546725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.546824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.546849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.546863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.546876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.546904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.556783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.556877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.556901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.556916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.556928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.556958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.566802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.566897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.566922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.566942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.566955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.566983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.576805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.576901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.576926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.576940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.576953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.576981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.586806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.586916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.586941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.586955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.021 [2024-07-23 18:22:54.586967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.021 [2024-07-23 18:22:54.586995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.021 qpair failed and we were unable to recover it.
00:34:47.021 [2024-07-23 18:22:54.596869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.021 [2024-07-23 18:22:54.596958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.021 [2024-07-23 18:22:54.596982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.021 [2024-07-23 18:22:54.596996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.022 [2024-07-23 18:22:54.597008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.022 [2024-07-23 18:22:54.597036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.022 qpair failed and we were unable to recover it.
00:34:47.022 [2024-07-23 18:22:54.606927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.022 [2024-07-23 18:22:54.607061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.022 [2024-07-23 18:22:54.607085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.022 [2024-07-23 18:22:54.607099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.022 [2024-07-23 18:22:54.607114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.022 [2024-07-23 18:22:54.607141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.022 qpair failed and we were unable to recover it.
00:34:47.022 [2024-07-23 18:22:54.616934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.022 [2024-07-23 18:22:54.617027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.022 [2024-07-23 18:22:54.617051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.022 [2024-07-23 18:22:54.617065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.022 [2024-07-23 18:22:54.617078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.022 [2024-07-23 18:22:54.617106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.022 qpair failed and we were unable to recover it.
00:34:47.022 [2024-07-23 18:22:54.626981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.022 [2024-07-23 18:22:54.627091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.022 [2024-07-23 18:22:54.627116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.022 [2024-07-23 18:22:54.627130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.022 [2024-07-23 18:22:54.627142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.022 [2024-07-23 18:22:54.627171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.022 qpair failed and we were unable to recover it.
00:34:47.022 [2024-07-23 18:22:54.637002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.022 [2024-07-23 18:22:54.637100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.022 [2024-07-23 18:22:54.637125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.022 [2024-07-23 18:22:54.637139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.022 [2024-07-23 18:22:54.637152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.022 [2024-07-23 18:22:54.637179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.022 qpair failed and we were unable to recover it.
00:34:47.022 [2024-07-23 18:22:54.647016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.022 [2024-07-23 18:22:54.647101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.022 [2024-07-23 18:22:54.647128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.022 [2024-07-23 18:22:54.647142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.022 [2024-07-23 18:22:54.647155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.022 [2024-07-23 18:22:54.647182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.022 qpair failed and we were unable to recover it.
00:34:47.022 [2024-07-23 18:22:54.657147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.022 [2024-07-23 18:22:54.657293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.022 [2024-07-23 18:22:54.657332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.022 [2024-07-23 18:22:54.657355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.022 [2024-07-23 18:22:54.657369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.022 [2024-07-23 18:22:54.657400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.022 qpair failed and we were unable to recover it.
00:34:47.022 [2024-07-23 18:22:54.667130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.022 [2024-07-23 18:22:54.667240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.022 [2024-07-23 18:22:54.667269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.022 [2024-07-23 18:22:54.667284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.022 [2024-07-23 18:22:54.667297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.022 [2024-07-23 18:22:54.667335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.022 qpair failed and we were unable to recover it. 
00:34:47.022 [2024-07-23 18:22:54.677162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.022 [2024-07-23 18:22:54.677288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.022 [2024-07-23 18:22:54.677315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.022 [2024-07-23 18:22:54.677338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.022 [2024-07-23 18:22:54.677350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.022 [2024-07-23 18:22:54.677378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.022 qpair failed and we were unable to recover it. 
00:34:47.280 [2024-07-23 18:22:54.687152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.280 [2024-07-23 18:22:54.687238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.280 [2024-07-23 18:22:54.687264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.280 [2024-07-23 18:22:54.687277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.280 [2024-07-23 18:22:54.687290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.280 [2024-07-23 18:22:54.687326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.280 qpair failed and we were unable to recover it. 
00:34:47.280 [2024-07-23 18:22:54.697191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.280 [2024-07-23 18:22:54.697284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.280 [2024-07-23 18:22:54.697310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.280 [2024-07-23 18:22:54.697332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.280 [2024-07-23 18:22:54.697345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.280 [2024-07-23 18:22:54.697374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.280 qpair failed and we were unable to recover it. 
00:34:47.280 [2024-07-23 18:22:54.707213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.280 [2024-07-23 18:22:54.707335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.280 [2024-07-23 18:22:54.707360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.707374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.707387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.707415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.717254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.717351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.717376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.717390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.717403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.717431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.727276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.727367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.727393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.727407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.727420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.727448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.737323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.737437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.737462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.737476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.737490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.737518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.747367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.747460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.747485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.747533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.747547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.747577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.757401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.757492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.757517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.757531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.757544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.757571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.767368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.767473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.767498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.767512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.767525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.767553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.777474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.777570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.777595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.777609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.777622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.777649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.787440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.787532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.787557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.787571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.787584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.787612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.797563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.797655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.797681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.797695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.797708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.797736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.807498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.807635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.807660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.807674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.807687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.807715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.817563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.817656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.817682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.817695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.817708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.817736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.827653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.827742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.827767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.827781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.827793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.827821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.281 qpair failed and we were unable to recover it. 
00:34:47.281 [2024-07-23 18:22:54.837626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.281 [2024-07-23 18:22:54.837744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.281 [2024-07-23 18:22:54.837774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.281 [2024-07-23 18:22:54.837790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.281 [2024-07-23 18:22:54.837803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.281 [2024-07-23 18:22:54.837831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.847598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.847690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.847715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.847730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.847742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.847770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.857648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.857740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.857766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.857780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.857793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.857821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.867688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.867780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.867805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.867819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.867832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.867861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.877678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.877768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.877793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.877808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.877820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.877849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.887743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.887873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.887898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.887911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.887924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.887952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.897823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.897916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.897941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.897955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.897967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.897995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.907829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.907947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.907972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.907986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.907998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.908026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.917800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.917893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.917917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.917931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.917943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.917971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.927830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.927922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.927951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.927966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.927979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.928006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.282 [2024-07-23 18:22:54.937893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.282 [2024-07-23 18:22:54.938018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.282 [2024-07-23 18:22:54.938044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.282 [2024-07-23 18:22:54.938058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.282 [2024-07-23 18:22:54.938071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.282 [2024-07-23 18:22:54.938099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.282 qpair failed and we were unable to recover it. 
00:34:47.540 [2024-07-23 18:22:54.947917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.540 [2024-07-23 18:22:54.948010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.540 [2024-07-23 18:22:54.948036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.540 [2024-07-23 18:22:54.948050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.540 [2024-07-23 18:22:54.948063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.540 [2024-07-23 18:22:54.948091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.540 qpair failed and we were unable to recover it. 
00:34:47.540 [2024-07-23 18:22:54.957916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.540 [2024-07-23 18:22:54.958008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.540 [2024-07-23 18:22:54.958033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.540 [2024-07-23 18:22:54.958047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.540 [2024-07-23 18:22:54.958060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.540 [2024-07-23 18:22:54.958087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.540 qpair failed and we were unable to recover it. 
00:34:47.540 [2024-07-23 18:22:54.967947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.540 [2024-07-23 18:22:54.968036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.540 [2024-07-23 18:22:54.968060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.540 [2024-07-23 18:22:54.968074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.540 [2024-07-23 18:22:54.968087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:54.968120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:54.978010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:54.978106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:54.978131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:54.978146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:54.978158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:54.978186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:54.988042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:54.988147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:54.988176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:54.988192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:54.988206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:54.988235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:54.998032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:54.998124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:54.998149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:54.998163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:54.998177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:54.998205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.008061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.008157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.008182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.008196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.008209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.008239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.018112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.018207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.018240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.018255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.018268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.018296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.028115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.028248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.028273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.028287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.028300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.028334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.038159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.038246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.038270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.038284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.038297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.038331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.048192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.048282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.048306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.048327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.048342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.048370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.058310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.058413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.058438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.058452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.058465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.058499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.068235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.068330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.068356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.068371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.068384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.068413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.078244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.078370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.078396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.078410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.078423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.078451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.088288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.088386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.088412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.088426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.088439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.088466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.098329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.541 [2024-07-23 18:22:55.098440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.541 [2024-07-23 18:22:55.098465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.541 [2024-07-23 18:22:55.098479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.541 [2024-07-23 18:22:55.098491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.541 [2024-07-23 18:22:55.098520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-23 18:22:55.108374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.108474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.108504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.108519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.108531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.108560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.118378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.118479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.118504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.118519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.118531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.118559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.128398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.128493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.128518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.128532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.128545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.128572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.138429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.138525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.138550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.138564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.138577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.138606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.148462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.148559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.148584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.148598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.148611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.148645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.158548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.158639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.158664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.158678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.158690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.158718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.168548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.168637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.168663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.168677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.168691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.168719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.178550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.178647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.178673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.178687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.178699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.178728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.188574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.188664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.188689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.188703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.188716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.188743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-23 18:22:55.198608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.542 [2024-07-23 18:22:55.198744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.542 [2024-07-23 18:22:55.198775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.542 [2024-07-23 18:22:55.198791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.542 [2024-07-23 18:22:55.198804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.542 [2024-07-23 18:22:55.198838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.801 [2024-07-23 18:22:55.208653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.801 [2024-07-23 18:22:55.208781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.801 [2024-07-23 18:22:55.208806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.801 [2024-07-23 18:22:55.208821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.801 [2024-07-23 18:22:55.208833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.801 [2024-07-23 18:22:55.208862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.801 qpair failed and we were unable to recover it. 
00:34:47.801 [2024-07-23 18:22:55.218683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.801 [2024-07-23 18:22:55.218781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.801 [2024-07-23 18:22:55.218806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.801 [2024-07-23 18:22:55.218820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.801 [2024-07-23 18:22:55.218832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.801 [2024-07-23 18:22:55.218860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.801 qpair failed and we were unable to recover it. 
00:34:47.801 [2024-07-23 18:22:55.228683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.801 [2024-07-23 18:22:55.228785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.801 [2024-07-23 18:22:55.228809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.801 [2024-07-23 18:22:55.228823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.801 [2024-07-23 18:22:55.228835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.801 [2024-07-23 18:22:55.228863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.801 qpair failed and we were unable to recover it. 
00:34:47.801 [2024-07-23 18:22:55.238762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.801 [2024-07-23 18:22:55.238888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.801 [2024-07-23 18:22:55.238913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.801 [2024-07-23 18:22:55.238927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.801 [2024-07-23 18:22:55.238945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.801 [2024-07-23 18:22:55.238973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.801 qpair failed and we were unable to recover it. 
00:34:47.801 [2024-07-23 18:22:55.248783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.801 [2024-07-23 18:22:55.248893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.801 [2024-07-23 18:22:55.248921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.801 [2024-07-23 18:22:55.248937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.801 [2024-07-23 18:22:55.248951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.801 [2024-07-23 18:22:55.248980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.801 qpair failed and we were unable to recover it. 
00:34:47.801 [2024-07-23 18:22:55.258779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.801 [2024-07-23 18:22:55.258876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.801 [2024-07-23 18:22:55.258901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.801 [2024-07-23 18:22:55.258916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.801 [2024-07-23 18:22:55.258929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:47.801 [2024-07-23 18:22:55.258956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.801 qpair failed and we were unable to recover it. 
00:34:47.801 [2024-07-23 18:22:55.268867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.801 [2024-07-23 18:22:55.268976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.801 [2024-07-23 18:22:55.269005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.801 [2024-07-23 18:22:55.269020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.801 [2024-07-23 18:22:55.269033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.801 [2024-07-23 18:22:55.269061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.801 qpair failed and we were unable to recover it.
00:34:47.801 [2024-07-23 18:22:55.278858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.801 [2024-07-23 18:22:55.278951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.801 [2024-07-23 18:22:55.278976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.801 [2024-07-23 18:22:55.278990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.801 [2024-07-23 18:22:55.279003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.801 [2024-07-23 18:22:55.279032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.801 qpair failed and we were unable to recover it.
00:34:47.801 [2024-07-23 18:22:55.288848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.801 [2024-07-23 18:22:55.288940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.801 [2024-07-23 18:22:55.288965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.801 [2024-07-23 18:22:55.288980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.801 [2024-07-23 18:22:55.288993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.801 [2024-07-23 18:22:55.289021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.801 qpair failed and we were unable to recover it.
00:34:47.801 [2024-07-23 18:22:55.298881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.801 [2024-07-23 18:22:55.298976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.801 [2024-07-23 18:22:55.299001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.299016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.299028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.299056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.308892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.308981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.309006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.309021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.309034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.309061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.318949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.319040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.319065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.319079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.319092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.319119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.328992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.329092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.329116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.329130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.329149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.329177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.338984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.339079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.339104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.339118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.339131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.339159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.349024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.349116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.349140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.349154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.349166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.349193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.359048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.359138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.359164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.359178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.359192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.359220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.369083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.369171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.369198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.369214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.369230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.369261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.379150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.379269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.379295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.379309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.379330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.379362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.389151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.389269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.389294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.389309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.389329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.389358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.399162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.399252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.399277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.399290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.399303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.399338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.409171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.409259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.409284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.409298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.409311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.409348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.419225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.419335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.419369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.802 [2024-07-23 18:22:55.419386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.802 [2024-07-23 18:22:55.419404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.802 [2024-07-23 18:22:55.419434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.802 qpair failed and we were unable to recover it.
00:34:47.802 [2024-07-23 18:22:55.429275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.802 [2024-07-23 18:22:55.429405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.802 [2024-07-23 18:22:55.429431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.803 [2024-07-23 18:22:55.429445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.803 [2024-07-23 18:22:55.429458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.803 [2024-07-23 18:22:55.429486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.803 qpair failed and we were unable to recover it.
00:34:47.803 [2024-07-23 18:22:55.439314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.803 [2024-07-23 18:22:55.439416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.803 [2024-07-23 18:22:55.439442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.803 [2024-07-23 18:22:55.439456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.803 [2024-07-23 18:22:55.439469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.803 [2024-07-23 18:22:55.439497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.803 qpair failed and we were unable to recover it.
00:34:47.803 [2024-07-23 18:22:55.449291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.803 [2024-07-23 18:22:55.449400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.803 [2024-07-23 18:22:55.449425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.803 [2024-07-23 18:22:55.449439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.803 [2024-07-23 18:22:55.449452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.803 [2024-07-23 18:22:55.449479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.803 qpair failed and we were unable to recover it.
00:34:47.803 [2024-07-23 18:22:55.459350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.803 [2024-07-23 18:22:55.459453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.803 [2024-07-23 18:22:55.459479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.803 [2024-07-23 18:22:55.459494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.803 [2024-07-23 18:22:55.459507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:47.803 [2024-07-23 18:22:55.459536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.803 qpair failed and we were unable to recover it.
00:34:48.061 [2024-07-23 18:22:55.469371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.061 [2024-07-23 18:22:55.469467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.061 [2024-07-23 18:22:55.469493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.061 [2024-07-23 18:22:55.469507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.469520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.469549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.479397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.479496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.479523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.479538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.479551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.479579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.489418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.489506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.489531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.489545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.489558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.489586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.499464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.499558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.499583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.499598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.499610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.499638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.509524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.509641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.509666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.509686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.509700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.509728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.519496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.519594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.519621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.519635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.519648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.519676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.529576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.529668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.529696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.529711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.529724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.529752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.539553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.539652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.539678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.539692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.539704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.539732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.549624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.549723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.549749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.549763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.549776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.549804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.559703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.559795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.559820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.559834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.559847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.559875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.569634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.569720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.569746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.569760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.569773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.569801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.579685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.579778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.579803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.579817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.579830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.579858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.589697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.589789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.589814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.589829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.589842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.589870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.062 [2024-07-23 18:22:55.599786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.062 [2024-07-23 18:22:55.599911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.062 [2024-07-23 18:22:55.599936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.062 [2024-07-23 18:22:55.599956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.062 [2024-07-23 18:22:55.599971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.062 [2024-07-23 18:22:55.600002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.062 qpair failed and we were unable to recover it.
00:34:48.063 [2024-07-23 18:22:55.609832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.063 [2024-07-23 18:22:55.609948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.063 [2024-07-23 18:22:55.609973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.063 [2024-07-23 18:22:55.609987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.063 [2024-07-23 18:22:55.610000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.063 [2024-07-23 18:22:55.610028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.063 qpair failed and we were unable to recover it.
00:34:48.063 [2024-07-23 18:22:55.619777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.063 [2024-07-23 18:22:55.619871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.063 [2024-07-23 18:22:55.619895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.063 [2024-07-23 18:22:55.619909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.063 [2024-07-23 18:22:55.619922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.063 [2024-07-23 18:22:55.619950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.063 qpair failed and we were unable to recover it.
00:34:48.063 [2024-07-23 18:22:55.629816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.629908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.629934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.629948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.629961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.629989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.639827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.639920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.639945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.639959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.639972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.640000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.649892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.649997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.650022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.650036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.650049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.650077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.659946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.660049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.660074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.660088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.660102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.660130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.669941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.670040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.670065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.670079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.670092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.670119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.679953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.680052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.680077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.680091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.680104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.680132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.689978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.690072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.690099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.690119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.690133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.690162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.700006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.700107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.700132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.700146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.700160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.700187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.710015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.710137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.710162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.710176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.710189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.710218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.063 [2024-07-23 18:22:55.720089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.063 [2024-07-23 18:22:55.720185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.063 [2024-07-23 18:22:55.720211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.063 [2024-07-23 18:22:55.720225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.063 [2024-07-23 18:22:55.720238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.063 [2024-07-23 18:22:55.720266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.063 qpair failed and we were unable to recover it. 
00:34:48.321 [2024-07-23 18:22:55.730115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.321 [2024-07-23 18:22:55.730238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.321 [2024-07-23 18:22:55.730264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.321 [2024-07-23 18:22:55.730278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.321 [2024-07-23 18:22:55.730291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.321 [2024-07-23 18:22:55.730326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.321 qpair failed and we were unable to recover it. 
00:34:48.321 [2024-07-23 18:22:55.740128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.740222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.740247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.740261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.740275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.740302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.750146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.750239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.750265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.750279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.750292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.750327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.760193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.760289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.760314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.760337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.760350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.760379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.770197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.770289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.770315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.770338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.770351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.770379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.780226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.780359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.780392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.780407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.780420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.780448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.790243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.790344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.790370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.790384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.790397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.790425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.800368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.800462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.800487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.800501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.800514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.800543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.810306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.810402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.810430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.810444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.810458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.810488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.820359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.820454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.820479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.820493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.820505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.820535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.830376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.830472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.830497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.830511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.830524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.830552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.840413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.840528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.840557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.840572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.840585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.840614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.850500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.850595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.850620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.850634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.850647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.850676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.860470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.860560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.322 [2024-07-23 18:22:55.860586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.322 [2024-07-23 18:22:55.860600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.322 [2024-07-23 18:22:55.860612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.322 [2024-07-23 18:22:55.860641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.322 qpair failed and we were unable to recover it. 
00:34:48.322 [2024-07-23 18:22:55.870510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.322 [2024-07-23 18:22:55.870608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.870639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.870656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.870669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.870697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.880538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.880654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.880679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.880693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.880705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.880734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.890606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.890722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.890747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.890761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.890774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.890802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.900631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.900729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.900755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.900770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.900783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.900811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.910595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.910686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.910711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.910725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.910738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.910772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.920672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.920795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.920820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.920834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.920847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.920874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.930644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.930777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.930802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.930815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.930828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.930855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.940764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.940872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.940901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.940916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.940929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.940958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.950738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.950838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.950864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.950878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.950891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.950919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.960733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.960822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.960852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.960867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.960880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.960908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.323 [2024-07-23 18:22:55.970804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.323 [2024-07-23 18:22:55.970892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.323 [2024-07-23 18:22:55.970917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.323 [2024-07-23 18:22:55.970931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.323 [2024-07-23 18:22:55.970944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.323 [2024-07-23 18:22:55.970972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.323 qpair failed and we were unable to recover it. 
00:34:48.581 [2024-07-23 18:22:55.980797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.581 [2024-07-23 18:22:55.980892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.581 [2024-07-23 18:22:55.980918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.581 [2024-07-23 18:22:55.980932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.581 [2024-07-23 18:22:55.980945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.581 [2024-07-23 18:22:55.980974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.581 qpair failed and we were unable to recover it. 
00:34:48.581 [2024-07-23 18:22:55.990813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.581 [2024-07-23 18:22:55.990905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.581 [2024-07-23 18:22:55.990931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.581 [2024-07-23 18:22:55.990945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.581 [2024-07-23 18:22:55.990958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.581 [2024-07-23 18:22:55.990986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.581 qpair failed and we were unable to recover it. 
00:34:48.581 [2024-07-23 18:22:56.000847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.581 [2024-07-23 18:22:56.000942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.581 [2024-07-23 18:22:56.000967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.581 [2024-07-23 18:22:56.000981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.581 [2024-07-23 18:22:56.000993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.001027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.010914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.011037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.011065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.011080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.011093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.011122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.020957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.021048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.021074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.021088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.021100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.021128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.031041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.031172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.031198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.031212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.031225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.031253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.041026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.041123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.041148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.041162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.041175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.041203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.051032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.051151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.051181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.051198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.051210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.051238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.061109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.061205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.061230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.061243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.061256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.061284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.071080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.071180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.071205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.071219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.071232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.071259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.081101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.081223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.081247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.081261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.081273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.081302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.091103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.091204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.091229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.091243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.091256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.091289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.101152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.101247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.101273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.101287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.101300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.101335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.111200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.111290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.111321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.111339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.111351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.111380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.121185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.121278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.121303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.121323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.121337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.121364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.131230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.582 [2024-07-23 18:22:56.131330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.582 [2024-07-23 18:22:56.131358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.582 [2024-07-23 18:22:56.131372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.582 [2024-07-23 18:22:56.131385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.582 [2024-07-23 18:22:56.131414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.582 qpair failed and we were unable to recover it. 
00:34:48.582 [2024-07-23 18:22:56.141283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.141428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.141458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.141472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.141486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.141514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.151308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.151428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.151453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.151467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.151480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.151508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.161326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.161414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.161439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.161453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.161466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.161494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.171353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.171463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.171488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.171502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.171515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.171543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.181376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.181469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.181494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.181508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.181527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.181556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.191411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.191503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.191528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.191542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.191555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.191583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.201425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.201514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.201538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.201553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.201565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.201593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.211520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.211608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.211636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.211653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.211665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.211694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.221525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.583 [2024-07-23 18:22:56.221615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.583 [2024-07-23 18:22:56.221641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.583 [2024-07-23 18:22:56.221655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.583 [2024-07-23 18:22:56.221668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:48.583 [2024-07-23 18:22:56.221697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.583 qpair failed and we were unable to recover it. 
00:34:48.583 [2024-07-23 18:22:56.231526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.583 [2024-07-23 18:22:56.231619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.583 [2024-07-23 18:22:56.231644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.583 [2024-07-23 18:22:56.231658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.583 [2024-07-23 18:22:56.231671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.583 [2024-07-23 18:22:56.231699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.583 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.241586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.241675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.241701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.241715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.241728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.241756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.251563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.251653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.251678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.251693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.251706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.251734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.261697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.261828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.261854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.261868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.261881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.261909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.271630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.271726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.271752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.271766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.271784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.271813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.281675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.281770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.281795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.281809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.281822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.281850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.291754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.291849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.291876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.291890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.291903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.291931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.301709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.301807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.301832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.301847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.301859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.301886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.311726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.311852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.311879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.311893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.311908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.311938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.321776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.321882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.321909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.321923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.321935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.321964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.331818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.331909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.331934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.331948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.331961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.331990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.341856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.341952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.842 [2024-07-23 18:22:56.341979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.842 [2024-07-23 18:22:56.341993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.842 [2024-07-23 18:22:56.342006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.842 [2024-07-23 18:22:56.342034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.842 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.351843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.842 [2024-07-23 18:22:56.351938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.351963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.351977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.351989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.352016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.361910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.362007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.362033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.362047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.362066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.362096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.371925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.372034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.372061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.372076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.372092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.372120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.381947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.382041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.382067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.382081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.382094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.382122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.391959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.392064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.392089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.392103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.392116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.392144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.401982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.402125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.402154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.402170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.402183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.402212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.412009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.412106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.412132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.412146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.412159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.412187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.422034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.422130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.422156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.422170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.422183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.422211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.432093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.432199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.432227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.432243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.432256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.432285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.442110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.442206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.442232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.442246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.442258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.442287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.452132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.452249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.452274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.452293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.452307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.452347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.462182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.462296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.462331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.462347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.462361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.462390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.843 [2024-07-23 18:22:56.472201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.843 [2024-07-23 18:22:56.472300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.843 [2024-07-23 18:22:56.472334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.843 [2024-07-23 18:22:56.472349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.843 [2024-07-23 18:22:56.472362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.843 [2024-07-23 18:22:56.472390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.843 qpair failed and we were unable to recover it.
00:34:48.842 [2024-07-23 18:22:56.482252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.844 [2024-07-23 18:22:56.482385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.844 [2024-07-23 18:22:56.482410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.844 [2024-07-23 18:22:56.482424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.844 [2024-07-23 18:22:56.482437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.844 [2024-07-23 18:22:56.482466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.844 qpair failed and we were unable to recover it.
00:34:48.844 [2024-07-23 18:22:56.492243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.844 [2024-07-23 18:22:56.492336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.844 [2024-07-23 18:22:56.492361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.844 [2024-07-23 18:22:56.492375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.844 [2024-07-23 18:22:56.492389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:48.844 [2024-07-23 18:22:56.492418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.844 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.502295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.502438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.502465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.502479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.502492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.502521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.512343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.512439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.512465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.512479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.512493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.512522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.522339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.522436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.522463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.522478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.522491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.522519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.532398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.532492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.532518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.532532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.532544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.532573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.542418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.542514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.542540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.542563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.542576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.542606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.552457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.552553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.552579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.552593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.552606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.552634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.562444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.562532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.562557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.562571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.562584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.562612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.572510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.572616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.572645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.572660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.572674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.572704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.582533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.103 [2024-07-23 18:22:56.582632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.103 [2024-07-23 18:22:56.582658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.103 [2024-07-23 18:22:56.582673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.103 [2024-07-23 18:22:56.582686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.103 [2024-07-23 18:22:56.582713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.103 qpair failed and we were unable to recover it.
00:34:49.103 [2024-07-23 18:22:56.592566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.103 [2024-07-23 18:22:56.592684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.103 [2024-07-23 18:22:56.592710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.103 [2024-07-23 18:22:56.592724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.103 [2024-07-23 18:22:56.592740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.103 [2024-07-23 18:22:56.592770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.103 qpair failed and we were unable to recover it. 
00:34:49.103 [2024-07-23 18:22:56.602595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.602723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.602749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.602763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.602776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.602804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.612581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.612664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.612689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.612703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.612715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.612743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.622630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.622724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.622749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.622763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.622776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.622804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.632640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.632731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.632756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.632776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.632790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.632819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.642686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.642806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.642831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.642845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.642857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.642886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.652689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.652775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.652801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.652815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.652827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.652855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.662748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.662839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.662864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.662878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.662891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.662918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.672791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.672885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.672910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.672923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.672936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.672964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.682792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.682885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.682910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.682924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.682937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.682964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.692858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.692950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.692978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.692993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.693005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.693033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.702839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.702932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.702958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.702972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.702985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.703012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.104 [2024-07-23 18:22:56.712920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.104 [2024-07-23 18:22:56.713012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.104 [2024-07-23 18:22:56.713037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.104 [2024-07-23 18:22:56.713052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.104 [2024-07-23 18:22:56.713064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.104 [2024-07-23 18:22:56.713092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.104 qpair failed and we were unable to recover it. 
00:34:49.105 [2024-07-23 18:22:56.722944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.105 [2024-07-23 18:22:56.723067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.105 [2024-07-23 18:22:56.723097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.105 [2024-07-23 18:22:56.723113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.105 [2024-07-23 18:22:56.723125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.105 [2024-07-23 18:22:56.723153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.105 qpair failed and we were unable to recover it. 
00:34:49.105 [2024-07-23 18:22:56.732937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.105 [2024-07-23 18:22:56.733020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.105 [2024-07-23 18:22:56.733045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.105 [2024-07-23 18:22:56.733059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.105 [2024-07-23 18:22:56.733072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.105 [2024-07-23 18:22:56.733101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.105 qpair failed and we were unable to recover it. 
00:34:49.105 [2024-07-23 18:22:56.742974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.105 [2024-07-23 18:22:56.743067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.105 [2024-07-23 18:22:56.743092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.105 [2024-07-23 18:22:56.743106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.105 [2024-07-23 18:22:56.743119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.105 [2024-07-23 18:22:56.743146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.105 qpair failed and we were unable to recover it. 
00:34:49.105 [2024-07-23 18:22:56.753002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.105 [2024-07-23 18:22:56.753111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.105 [2024-07-23 18:22:56.753136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.105 [2024-07-23 18:22:56.753150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.105 [2024-07-23 18:22:56.753163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.105 [2024-07-23 18:22:56.753191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.105 qpair failed and we were unable to recover it. 
00:34:49.363 [2024-07-23 18:22:56.763037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.363 [2024-07-23 18:22:56.763124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.363 [2024-07-23 18:22:56.763151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.363 [2024-07-23 18:22:56.763166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.363 [2024-07-23 18:22:56.763179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.363 [2024-07-23 18:22:56.763207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.363 qpair failed and we were unable to recover it. 
00:34:49.363 [2024-07-23 18:22:56.773048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.363 [2024-07-23 18:22:56.773140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.773166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.773180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.773193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.773221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.783086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.783181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.783207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.783221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.783234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.783262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.793106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.793191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.793217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.793231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.793244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.793271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.803284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.803428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.803453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.803467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.803480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.803507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.813174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.813285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.813321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.813338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.813352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.813382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.823221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.823323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.823349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.823363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.823375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.823404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.833222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.833332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.833357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.833372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.833384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.833412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.843272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.843372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.843398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.843411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.843424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.843453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.853314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.853413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.853438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.853452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.853465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.853499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.863326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.364 [2024-07-23 18:22:56.863420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.364 [2024-07-23 18:22:56.863444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.364 [2024-07-23 18:22:56.863459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.364 [2024-07-23 18:22:56.863471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.364 [2024-07-23 18:22:56.863500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.364 qpair failed and we were unable to recover it. 
00:34:49.364 [2024-07-23 18:22:56.873336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.364 [2024-07-23 18:22:56.873427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.364 [2024-07-23 18:22:56.873452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.364 [2024-07-23 18:22:56.873466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.364 [2024-07-23 18:22:56.873479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.364 [2024-07-23 18:22:56.873507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.364 qpair failed and we were unable to recover it.
00:34:49.364 [2024-07-23 18:22:56.883374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.364 [2024-07-23 18:22:56.883464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.364 [2024-07-23 18:22:56.883490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.364 [2024-07-23 18:22:56.883504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.364 [2024-07-23 18:22:56.883516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.364 [2024-07-23 18:22:56.883544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.364 qpair failed and we were unable to recover it.
00:34:49.364 [2024-07-23 18:22:56.893476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.893579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.893604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.893618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.893631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.893660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.903448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.903537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.903568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.903583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.903595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.903623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.913481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.913575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.913600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.913613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.913626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.913654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.923492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.923578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.923603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.923617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.923630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.923657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.933514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.933604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.933633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.933648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.933661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.933690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.943546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.943692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.943717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.943732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.943745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.943779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.953582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.953678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.953704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.953717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.953730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.953758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.963618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.963707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.963732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.963746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.963759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.963787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.973624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.973708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.973733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.973747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.973760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.973788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.983640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.983759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.983784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.983799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.983812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.983840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:56.993675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:56.993764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:56.993794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:56.993809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:56.993822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:56.993849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:57.003770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:57.003864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:57.003890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:57.003904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:57.003917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:57.003945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.365 [2024-07-23 18:22:57.013759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.365 [2024-07-23 18:22:57.013856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.365 [2024-07-23 18:22:57.013884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.365 [2024-07-23 18:22:57.013900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.365 [2024-07-23 18:22:57.013913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.365 [2024-07-23 18:22:57.013942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.365 qpair failed and we were unable to recover it.
00:34:49.623 [2024-07-23 18:22:57.023805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.623 [2024-07-23 18:22:57.023908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.623 [2024-07-23 18:22:57.023934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.623 [2024-07-23 18:22:57.023949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.623 [2024-07-23 18:22:57.023962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.023990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.033806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.033944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.033969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.033984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.033996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.034030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.043865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.043958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.043983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.043997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.044010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.044038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.053856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.053952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.053977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.053991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.054005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.054033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.063922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.064026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.064055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.064070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.064083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.064113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.073896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.074031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.074057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.074071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.074084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.074112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.083947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.084036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.084066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.084081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.084093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.084123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.093961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.094051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.094075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.094089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.094102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.094130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.104003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.104099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.104124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.104138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.104151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.104179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.114025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.114119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.114144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.114159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.114172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.114199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.124075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.124198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.124224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.124237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.124256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.124285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.134089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.134189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.134218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.134233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.134246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.134275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.144128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.144221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.144250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.144265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.144278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.144306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.154164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.154252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.154277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.154291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.154304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.154341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.164198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.164340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.164365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.164379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.164391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.164419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.174206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.174302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.174340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.174359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.174372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.174401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.184259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.184406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.184435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.184449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.184462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.184492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.194247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.194346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.194372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.194386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.194399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.194427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.204271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.204366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.204391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.204405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.204418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.204446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.214325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.214447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.214472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.214486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.214504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.214533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.224345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.624 [2024-07-23 18:22:57.224436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.624 [2024-07-23 18:22:57.224461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.624 [2024-07-23 18:22:57.224475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.624 [2024-07-23 18:22:57.224488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40
00:34:49.624 [2024-07-23 18:22:57.224516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.624 qpair failed and we were unable to recover it.
00:34:49.624 [2024-07-23 18:22:57.234362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.624 [2024-07-23 18:22:57.234452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.624 [2024-07-23 18:22:57.234477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.624 [2024-07-23 18:22:57.234491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.624 [2024-07-23 18:22:57.234504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.624 [2024-07-23 18:22:57.234533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.624 qpair failed and we were unable to recover it. 
00:34:49.624 [2024-07-23 18:22:57.244392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.624 [2024-07-23 18:22:57.244481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.624 [2024-07-23 18:22:57.244506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.624 [2024-07-23 18:22:57.244521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.624 [2024-07-23 18:22:57.244534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.624 [2024-07-23 18:22:57.244562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.624 qpair failed and we were unable to recover it. 
00:34:49.624 [2024-07-23 18:22:57.254408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.624 [2024-07-23 18:22:57.254492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.624 [2024-07-23 18:22:57.254517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.624 [2024-07-23 18:22:57.254531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.624 [2024-07-23 18:22:57.254544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.624 [2024-07-23 18:22:57.254572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.624 qpair failed and we were unable to recover it. 
00:34:49.624 [2024-07-23 18:22:57.264485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.624 [2024-07-23 18:22:57.264589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.624 [2024-07-23 18:22:57.264614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.624 [2024-07-23 18:22:57.264628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.624 [2024-07-23 18:22:57.264641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.624 [2024-07-23 18:22:57.264669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.624 qpair failed and we were unable to recover it. 
00:34:49.624 [2024-07-23 18:22:57.274500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.624 [2024-07-23 18:22:57.274600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.624 [2024-07-23 18:22:57.274626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.624 [2024-07-23 18:22:57.274640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.624 [2024-07-23 18:22:57.274653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.624 [2024-07-23 18:22:57.274681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.624 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.284565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.284665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.284691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.284705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.284718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.284746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.294552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.294644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.294669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.294683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.294696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.294724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.304572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.304666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.304692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.304706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.304725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.304754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.314602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.314722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.314746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.314760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.314773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.314801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.324617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.324709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.324734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.324748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.324760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.324788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.334641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.334731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.334757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.334771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.334784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.334812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.344722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.344825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.344849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.344863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.344876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.344904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.354767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.354858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.354882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.354896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.354908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.354936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.364769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.882 [2024-07-23 18:22:57.364902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.882 [2024-07-23 18:22:57.364927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.882 [2024-07-23 18:22:57.364941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.882 [2024-07-23 18:22:57.364954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.882 [2024-07-23 18:22:57.364981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.882 qpair failed and we were unable to recover it. 
00:34:49.882 [2024-07-23 18:22:57.374791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.374895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.374920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.374935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.374948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.374975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.384823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.384919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.384945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.384959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.384972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.385000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.394816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.394952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.394977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.394997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.395010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.395040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.404880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.404970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.404996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.405009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.405022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.405050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.414916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.415024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.415048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.415062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.415074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.415102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.424928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.425020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.425045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.425059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.425071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.425099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.434976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.435070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.435096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.435110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.435123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.435151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.444976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.445067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.445093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.445108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.445120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.445148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.455024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.455115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.455141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.455155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.455167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.455195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.465052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.465147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.465173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.465187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.465200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.465228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.475068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.475161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.475186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.475200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.475213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.475241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.485087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.485182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.485210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.485232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.485246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.485276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.495128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.495260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.495288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.495303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.495323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.495355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.505147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.505242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.505268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.505282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.505294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.505330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.515263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.515363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.515392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.515407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.515420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.515449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.525189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.525288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.525314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.525338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.525351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.525379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:49.883 [2024-07-23 18:22:57.535263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.883 [2024-07-23 18:22:57.535361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.883 [2024-07-23 18:22:57.535387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.883 [2024-07-23 18:22:57.535401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.883 [2024-07-23 18:22:57.535414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:49.883 [2024-07-23 18:22:57.535444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.883 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.545351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.545447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.545474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.545489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.545502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:50.141 [2024-07-23 18:22:57.545530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.555282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.555390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.555417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.555432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.555445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:50.141 [2024-07-23 18:22:57.555474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.565333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.565442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.565468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.565482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.565494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:50.141 [2024-07-23 18:22:57.565523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.575384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.575481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.575506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.575526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.575540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:50.141 [2024-07-23 18:22:57.575568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.585383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.585478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.585503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.585517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.585530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:50.141 [2024-07-23 18:22:57.585558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.595501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.595599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.595624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.595638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.595652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:50.141 [2024-07-23 18:22:57.595679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.605434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.605524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.605549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.605563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.605576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b3f40 00:34:50.141 [2024-07-23 18:22:57.605604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.615484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.615581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.615614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.615630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.615644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:50.141 [2024-07-23 18:22:57.615677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.625583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.625676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.625703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.625717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.625731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6320000b90 00:34:50.141 [2024-07-23 18:22:57.625761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.635542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.635644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.635677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.635696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.635710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6328000b90 00:34:50.141 [2024-07-23 18:22:57.635744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.645586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.645678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.645705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.645720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.645733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6328000b90 00:34:50.141 [2024-07-23 18:22:57.645764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:50.141 qpair failed and we were unable to recover it. 00:34:50.141 [2024-07-23 18:22:57.645884] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:50.141 A controller has encountered a failure and is being reset. 
00:34:50.141 [2024-07-23 18:22:57.655589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.655682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.655714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.655733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.655746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6330000b90 00:34:50.141 [2024-07-23 18:22:57.655777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.141 qpair failed and we were unable to recover it. 
00:34:50.141 [2024-07-23 18:22:57.665643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.141 [2024-07-23 18:22:57.665743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.141 [2024-07-23 18:22:57.665771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.141 [2024-07-23 18:22:57.665786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.141 [2024-07-23 18:22:57.665798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6330000b90 00:34:50.141 [2024-07-23 18:22:57.665829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.141 qpair failed and we were unable to recover it. 00:34:50.141 Controller properly reset. 00:34:50.141 Initializing NVMe Controllers 00:34:50.141 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:50.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:50.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:50.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:50.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:50.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:50.141 Initialization complete. Launching workers. 
00:34:50.141 Starting thread on core 1 00:34:50.141 Starting thread on core 2 00:34:50.141 Starting thread on core 3 00:34:50.141 Starting thread on core 0 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:50.141 00:34:50.141 real 0m10.846s 00:34:50.141 user 0m18.563s 00:34:50.141 sys 0m5.320s 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.141 ************************************ 00:34:50.141 END TEST nvmf_target_disconnect_tc2 00:34:50.141 ************************************ 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:50.141 rmmod nvme_tcp 00:34:50.141 rmmod nvme_fabrics 00:34:50.141 rmmod 
nvme_keyring 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2505207 ']' 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2505207 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2505207 ']' 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2505207 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2505207 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2505207' 00:34:50.141 killing process with pid 2505207 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2505207 00:34:50.141 18:22:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2505207 00:34:50.399 18:22:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:50.399 18:22:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:50.399 18:22:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:50.399 18:22:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:50.399 18:22:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:50.399 18:22:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.399 18:22:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.399 18:22:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.931 18:23:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:52.931 00:34:52.931 real 0m15.733s 00:34:52.931 user 0m44.931s 00:34:52.931 sys 0m7.389s 00:34:52.931 18:23:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:52.931 18:23:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:52.931 ************************************ 00:34:52.931 END TEST nvmf_target_disconnect 00:34:52.931 ************************************ 00:34:52.931 18:23:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:52.931 18:23:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:52.931 00:34:52.931 real 6m28.305s 00:34:52.931 user 16m39.278s 00:34:52.931 sys 1m25.808s 00:34:52.931 18:23:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:52.931 18:23:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.931 ************************************ 00:34:52.931 END TEST nvmf_host 00:34:52.931 ************************************ 00:34:52.931 18:23:00 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:34:52.931 00:34:52.931 real 26m54.974s 00:34:52.931 user 73m19.724s 00:34:52.931 sys 6m21.477s 00:34:52.931 18:23:00 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:52.931 18:23:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.931 ************************************ 00:34:52.931 END TEST nvmf_tcp 00:34:52.931 ************************************ 00:34:52.931 18:23:00 -- common/autotest_common.sh@1142 -- # return 0 00:34:52.931 18:23:00 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:52.931 18:23:00 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:52.931 18:23:00 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:52.931 18:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:52.931 18:23:00 -- common/autotest_common.sh@10 -- # set +x 00:34:52.931 ************************************ 00:34:52.931 START TEST spdkcli_nvmf_tcp 00:34:52.931 ************************************ 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:52.931 * Looking for test storage... 
00:34:52.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.931 18:23:00 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2506406 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2506406 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2506406 ']' 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.932 [2024-07-23 18:23:00.279805] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:34:52.932 [2024-07-23 18:23:00.279890] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506406 ] 00:34:52.932 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.932 [2024-07-23 18:23:00.336795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:52.932 [2024-07-23 18:23:00.421267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.932 [2024-07-23 18:23:00.421269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.932 18:23:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:52.932 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:52.932 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:52.932 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:34:52.932 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:52.932 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:52.932 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:52.932 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.932 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.932 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:52.932 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:52.932 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:52.932 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:52.932 ' 00:34:55.456 [2024-07-23 18:23:03.078973] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.825 [2024-07-23 18:23:04.303326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:59.346 [2024-07-23 18:23:06.566140] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:01.241 [2024-07-23 18:23:08.508013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:02.610 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:02.610 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:02.610 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:02.610 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:02.610 Executing command: 
['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:02.610 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:02.610 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:02.610 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.610 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.610 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:02.610 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:02.610 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:02.610 18:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:02.610 18:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:02.610 18:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.610 18:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:02.610 18:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:02.610 18:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.610 18:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:02.610 18:23:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.174 18:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:03.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:03.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:03.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:03.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:03.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:03.174 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 
00:35:03.174 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:03.174 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:03.174 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:03.174 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:03.174 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:03.174 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:03.174 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:03.174 ' 00:35:08.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:08.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:08.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:08.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:08.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:08.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:08.426 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:08.426 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:08.426 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:08.426 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:08.426 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:08.426 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:08.427 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:08.427 Executing command: 
['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2506406 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2506406 ']' 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2506406 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2506406 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2506406' 00:35:08.427 killing process with pid 2506406 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2506406 00:35:08.427 18:23:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2506406 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2506406 ']' 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2506406 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2506406 ']' 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2506406 00:35:08.685 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2506406) - No such process 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2506406 is not found' 00:35:08.685 Process with pid 2506406 is not found 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:08.685 00:35:08.685 real 0m15.941s 00:35:08.685 user 0m33.710s 00:35:08.685 sys 0m0.775s 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:08.685 18:23:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.685 ************************************ 00:35:08.685 END TEST spdkcli_nvmf_tcp 00:35:08.685 ************************************ 00:35:08.685 18:23:16 -- common/autotest_common.sh@1142 -- # return 0 00:35:08.685 18:23:16 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:08.685 18:23:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:08.685 18:23:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:08.685 18:23:16 -- common/autotest_common.sh@10 -- # set +x 00:35:08.685 ************************************ 00:35:08.685 START TEST nvmf_identify_passthru 00:35:08.685 ************************************ 00:35:08.685 18:23:16 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:08.685 * Looking for test storage... 
00:35:08.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:08.685 18:23:16 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.686 
18:23:16 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.686 18:23:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.686 18:23:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.686 18:23:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:08.686 18:23:16 nvmf_identify_passthru -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:08.686 18:23:16 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.686 18:23:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.686 18:23:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.686 18:23:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:08.686 18:23:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.686 18:23:16 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.686 18:23:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:08.686 18:23:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:08.686 18:23:16 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:08.686 18:23:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:10.588 
18:23:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:10.588 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:10.588 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:10.588 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:10.588 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:10.589 18:23:18 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:10.589 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:10.589 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:10.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:10.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:35:10.847 00:35:10.847 --- 10.0.0.2 ping statistics --- 00:35:10.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.847 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:10.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:10.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:35:10.847 00:35:10.847 --- 10.0.0.1 ping statistics --- 00:35:10.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.847 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:10.847 18:23:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:10.847 18:23:18 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.847 18:23:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:10.847 18:23:18 nvmf_identify_passthru -- 
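The trace above is nvmf/common.sh's nvmf_tcp_init: it moves the target-side interface into a fresh network namespace, addresses both ends out of 10.0.0.0/24, opens TCP port 4420, and ping-verifies both directions. A minimal standalone sketch of those steps, using the same cvl_0_0/cvl_0_1 names and addresses seen in the log; since the real commands need root, this version only prints them (the `run` helper is local to the sketch, not an SPDK function):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns plumbing done by nvmf_tcp_init. Interface
# names and addresses mirror the log; commands are printed, not executed.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side interface, moved into the namespace
INI_IF=cvl_0_1        # initiator-side interface, stays in the root ns
CMDS=""

run() {               # print the command and record it for inspection
    echo "$*"
    CMDS+="$* ; "
}

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The namespace split is what lets one host act as both NVMe-oF initiator (root namespace, 10.0.0.1) and target (cvl_0_0_ns_spdk, 10.0.0.2) over real NICs.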
common/autotest_common.sh@1513 -- # bdfs=() 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:35:10.847 18:23:18 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:35:10.847 18:23:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:10.847 18:23:18 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:10.847 18:23:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:10.847 18:23:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:10.847 18:23:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:10.847 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.118 18:23:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:15.118 18:23:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:15.118 18:23:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:35:15.118 18:23:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:15.118 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.299 18:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:19.299 18:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.299 18:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.299 18:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2510900 00:35:19.299 18:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:19.299 18:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:19.299 18:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2510900 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2510900 ']' 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:19.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:19.299 18:23:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.299 [2024-07-23 18:23:26.900712] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:35:19.299 [2024-07-23 18:23:26.900821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.299 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.556 [2024-07-23 18:23:26.968963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:19.556 [2024-07-23 18:23:27.057927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.556 [2024-07-23 18:23:27.057989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.557 [2024-07-23 18:23:27.058016] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.557 [2024-07-23 18:23:27.058028] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.557 [2024-07-23 18:23:27.058038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:19.557 [2024-07-23 18:23:27.058089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.557 [2024-07-23 18:23:27.058147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.557 [2024-07-23 18:23:27.058217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.557 [2024-07-23 18:23:27.058214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:19.557 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:19.557 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:19.557 18:23:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:19.557 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.557 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.557 INFO: Log level set to 20 00:35:19.557 INFO: Requests: 00:35:19.557 { 00:35:19.557 "jsonrpc": "2.0", 00:35:19.557 "method": "nvmf_set_config", 00:35:19.557 "id": 1, 00:35:19.557 "params": { 00:35:19.557 "admin_cmd_passthru": { 00:35:19.557 "identify_ctrlr": true 00:35:19.557 } 00:35:19.557 } 00:35:19.557 } 00:35:19.557 00:35:19.557 INFO: response: 00:35:19.557 { 00:35:19.557 "jsonrpc": "2.0", 00:35:19.557 "id": 1, 00:35:19.557 "result": true 00:35:19.557 } 00:35:19.557 00:35:19.557 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.557 18:23:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:19.557 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.557 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.557 INFO: Setting log level to 20 00:35:19.557 INFO: Setting log level to 20 00:35:19.557 INFO: Log level set to 20 00:35:19.557 INFO: Log level set to 20 00:35:19.557 
INFO: Requests: 00:35:19.557 { 00:35:19.557 "jsonrpc": "2.0", 00:35:19.557 "method": "framework_start_init", 00:35:19.557 "id": 1 00:35:19.557 } 00:35:19.557 00:35:19.557 INFO: Requests: 00:35:19.557 { 00:35:19.557 "jsonrpc": "2.0", 00:35:19.557 "method": "framework_start_init", 00:35:19.557 "id": 1 00:35:19.557 } 00:35:19.557 00:35:19.814 [2024-07-23 18:23:27.228665] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:19.814 INFO: response: 00:35:19.814 { 00:35:19.814 "jsonrpc": "2.0", 00:35:19.814 "id": 1, 00:35:19.814 "result": true 00:35:19.814 } 00:35:19.814 00:35:19.814 INFO: response: 00:35:19.814 { 00:35:19.814 "jsonrpc": "2.0", 00:35:19.814 "id": 1, 00:35:19.814 "result": true 00:35:19.814 } 00:35:19.814 00:35:19.814 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.814 18:23:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:19.814 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.814 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.814 INFO: Setting log level to 40 00:35:19.814 INFO: Setting log level to 40 00:35:19.814 INFO: Setting log level to 40 00:35:19.814 [2024-07-23 18:23:27.238789] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.814 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.814 18:23:27 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:19.814 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:19.814 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.814 18:23:27 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:19.814 18:23:27 
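The INFO: Requests/response dumps above are the JSON-RPC 2.0 traffic that rpc_cmd sends to the target's UNIX socket (/var/tmp/spdk.sock by default). A sketch of how those payloads are shaped; `jsonrpc_request` is a local helper for illustration, not part of SPDK, and the params strings are taken verbatim from the log:

```shell
#!/usr/bin/env bash
# Build the JSON-RPC 2.0 payloads seen in the log. jsonrpc_request is a
# local illustration helper; params is raw JSON and may be omitted.
set -euo pipefail

jsonrpc_request() {
    local id=$1 method=$2 params=${3:-}
    if [ -n "$params" ]; then
        printf '{"jsonrpc": "2.0", "method": "%s", "id": %d, "params": %s}' \
            "$method" "$id" "$params"
    else
        printf '{"jsonrpc": "2.0", "method": "%s", "id": %d}' "$method" "$id"
    fi
}

# The two requests from the log: enable identify passthru, then start
# the framework subsystems.
req1=$(jsonrpc_request 1 nvmf_set_config \
    '{"admin_cmd_passthru": {"identify_ctrlr": true}}')
req2=$(jsonrpc_request 1 framework_start_init)

# In the real harness these go to the app's socket, e.g. via
# scripts/rpc.py or: echo "$req1" | nc -U /var/tmp/spdk.sock
echo "$req1"
echo "$req2"
```

Note the ordering matters: nvmf_set_config is only accepted before framework_start_init, which is why the target is launched with --wait-for-rpc.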
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.814 18:23:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.089 Nvme0n1 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.089 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.089 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.089 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.089 [2024-07-23 18:23:30.137706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.089 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.089 18:23:30 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.089 [ 00:35:23.089 { 00:35:23.089 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:23.089 "subtype": "Discovery", 00:35:23.089 "listen_addresses": [], 00:35:23.089 "allow_any_host": true, 00:35:23.089 "hosts": [] 00:35:23.089 }, 00:35:23.089 { 00:35:23.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.089 "subtype": "NVMe", 00:35:23.089 "listen_addresses": [ 00:35:23.089 { 00:35:23.089 "trtype": "TCP", 00:35:23.089 "adrfam": "IPv4", 00:35:23.089 "traddr": "10.0.0.2", 00:35:23.089 "trsvcid": "4420" 00:35:23.089 } 00:35:23.089 ], 00:35:23.089 "allow_any_host": true, 00:35:23.089 "hosts": [], 00:35:23.089 "serial_number": "SPDK00000000000001", 00:35:23.089 "model_number": "SPDK bdev Controller", 00:35:23.089 "max_namespaces": 1, 00:35:23.089 "min_cntlid": 1, 00:35:23.089 "max_cntlid": 65519, 00:35:23.089 "namespaces": [ 00:35:23.089 { 00:35:23.089 "nsid": 1, 00:35:23.089 "bdev_name": "Nvme0n1", 00:35:23.089 "name": "Nvme0n1", 00:35:23.089 "nguid": "44CE673590BA4F53B8E929B2DFC79071", 00:35:23.089 "uuid": "44ce6735-90ba-4f53-b8e9-29b2dfc79071" 00:35:23.089 } 00:35:23.089 ] 00:35:23.089 } 00:35:23.089 ] 00:35:23.089 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.089 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:23.089 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:23.090 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:23.090 18:23:30 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:23.090 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:23.090 18:23:30 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:23.090 rmmod 
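The checks above ('[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' and the INTEL comparison) are the actual passthru assertion: run spdk_nvme_identify once against the local PCIe controller and once against the NVMe-oF/TCP subsystem, pull out the serial and model fields the same way (grep + awk '{print $3}'), and fail if they differ. A sketch of that extraction and comparison over canned identify output; the here-strings below are stand-ins, not captured device output:

```shell
#!/usr/bin/env bash
# Replicate identify_passthru.sh's field extraction and comparison.
# The two *_identify strings are illustrative stand-ins for the output
# of spdk_nvme_identify over PCIe and over NVMe/TCP respectively.
set -euo pipefail

extract_field() {    # $1 = field label; identify output on stdin
    grep "$1" | awk '{print $3}'
}

pcie_identify='Serial Number:                       PHLJ916004901P0FGN'
tcp_identify='Serial Number:                       PHLJ916004901P0FGN'

nvme_serial=$(extract_field 'Serial Number:' <<<"$pcie_identify")
nvmf_serial=$(extract_field 'Serial Number:' <<<"$tcp_identify")

if [ "$nvme_serial" != "$nvmf_serial" ]; then
    echo "passthru mismatch: $nvme_serial vs $nvmf_serial" >&2
    exit 1
fi
echo "serials match: $nvme_serial"
```

If the subsystem were not configured with --passthru-identify-ctrlr, the TCP side would report SPDK's own serial (SPDK00000000000001) instead of the drive's, and the comparison would fail.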
nvme_tcp 00:35:23.090 rmmod nvme_fabrics 00:35:23.090 rmmod nvme_keyring 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2510900 ']' 00:35:23.090 18:23:30 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2510900 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2510900 ']' 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2510900 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2510900 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2510900' 00:35:23.090 killing process with pid 2510900 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2510900 00:35:23.090 18:23:30 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2510900 00:35:24.987 18:23:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:24.987 18:23:32 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:24.987 18:23:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:24.987 18:23:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:35:24.987 18:23:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:24.987 18:23:32 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.987 18:23:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:24.987 18:23:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.908 18:23:34 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:26.908 00:35:26.908 real 0m18.164s 00:35:26.908 user 0m27.213s 00:35:26.908 sys 0m2.370s 00:35:26.908 18:23:34 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:26.908 18:23:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.908 ************************************ 00:35:26.908 END TEST nvmf_identify_passthru 00:35:26.908 ************************************ 00:35:26.908 18:23:34 -- common/autotest_common.sh@1142 -- # return 0 00:35:26.908 18:23:34 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:26.908 18:23:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:26.908 18:23:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:26.908 18:23:34 -- common/autotest_common.sh@10 -- # set +x 00:35:26.908 ************************************ 00:35:26.908 START TEST nvmf_dif 00:35:26.908 ************************************ 00:35:26.908 18:23:34 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:26.908 * Looking for test storage... 
00:35:26.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:26.908 18:23:34 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.908 18:23:34 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.908 18:23:34 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.908 18:23:34 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.908 18:23:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.908 18:23:34 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.908 18:23:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.908 18:23:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:26.908 18:23:34 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:26.908 18:23:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:26.908 18:23:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:26.908 18:23:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:26.908 18:23:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:26.908 18:23:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.908 18:23:34 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:26.908 18:23:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:26.908 18:23:34 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:26.908 18:23:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:28.810 18:23:36 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:29.069 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:35:29.069 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:29.069 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:29.069 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.069 18:23:36 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:29.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:35:29.069 00:35:29.069 --- 10.0.0.2 ping statistics --- 00:35:29.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.069 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:29.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:35:29.069 00:35:29.069 --- 10.0.0.1 ping statistics --- 00:35:29.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.069 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:29.069 18:23:36 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:30.002 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:30.002 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:30.002 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:30.002 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:30.002 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:30.002 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:30.002 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:30.002 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:30.002 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:30.002 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:30.002 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:30.002 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:30.002 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:30.002 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:30.002 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:30.002 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:30.002 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:30.260 18:23:37 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:30.260 18:23:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:30.260 18:23:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:30.260 18:23:37 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:30.260 18:23:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2514161 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:30.260 18:23:37 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2514161 00:35:30.260 18:23:37 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2514161 ']' 00:35:30.260 18:23:37 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.260 18:23:37 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:30.260 18:23:37 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.260 18:23:37 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:30.260 18:23:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.260 [2024-07-23 18:23:37.890032] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
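The `nvmf_tcp_init` trace above (common.sh@229–268) boils down to the following namespace plumbing. Interface names, addresses, and the 4420 port are taken verbatim from the log; the sequence needs root on the CI host with these exact NICs present, so read it as a sketch of what the harness does, not a standalone script:

```shell
# Flush any stale addresses, then move the target port into its own netns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic in on the initiator port, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```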
00:35:30.261 [2024-07-23 18:23:37.890109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.518 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.518 [2024-07-23 18:23:37.953943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.518 [2024-07-23 18:23:38.033766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.518 [2024-07-23 18:23:38.033824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.518 [2024-07-23 18:23:38.033860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:30.518 [2024-07-23 18:23:38.033871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:30.518 [2024-07-23 18:23:38.033880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
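The `nvmfappstart` step above then launches the target inside that namespace so its TCP listener binds the moved-in port. The binary path and flags are from the log; the polling loop is a simplified stand-in for the harness's `waitforlisten` helper:

```shell
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
# waitforlisten: block until the app answers on its default RPC socket.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
    sleep 0.2
done
```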
00:35:30.518 [2024-07-23 18:23:38.033905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:30.518 18:23:38 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.518 18:23:38 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.518 18:23:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:30.518 18:23:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.518 [2024-07-23 18:23:38.170573] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.518 18:23:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.518 18:23:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.776 ************************************ 00:35:30.776 START TEST fio_dif_1_default 00:35:30.776 ************************************ 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.777 bdev_null0 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.777 [2024-07-23 18:23:38.226883] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.777 { 00:35:30.777 "params": { 00:35:30.777 "name": "Nvme$subsystem", 00:35:30.777 "trtype": "$TEST_TRANSPORT", 00:35:30.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.777 "adrfam": "ipv4", 00:35:30.777 "trsvcid": "$NVMF_PORT", 00:35:30.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.777 "hdgst": ${hdgst:-false}, 00:35:30.777 "ddgst": ${ddgst:-false} 00:35:30.777 }, 00:35:30.777 "method": "bdev_nvme_attach_controller" 00:35:30.777 } 00:35:30.777 EOF 00:35:30.777 )") 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
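The `create_transport` and `create_subsystem 0` traces above correspond to this RPC sequence. Method names and flags are verbatim from the log; running them through `scripts/rpc.py` from the SPDK checkout (against the target started earlier) is the usual equivalent of the harness's `rpc_cmd` wrapper:

```shell
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# One 64 MiB null bdev: 512-byte blocks, 16-byte metadata, DIF type 1.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
```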
00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:30.777 "params": { 00:35:30.777 "name": "Nvme0", 00:35:30.777 "trtype": "tcp", 00:35:30.777 "traddr": "10.0.0.2", 00:35:30.777 "adrfam": "ipv4", 00:35:30.777 "trsvcid": "4420", 00:35:30.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.777 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.777 "hdgst": false, 00:35:30.777 "ddgst": false 00:35:30.777 }, 00:35:30.777 "method": "bdev_nvme_attach_controller" 00:35:30.777 }' 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:30.777 18:23:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.035 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:31.035 fio-3.35 
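The `gen_nvmf_target_json` helper traced above expands a shell heredoc once per subsystem id into the `bdev_nvme_attach_controller` entries that fio's SPDK plugin consumes. A small illustrative re-implementation (the helper's name comes from the log; the Python rendering is an assumption and mirrors only the fields the trace actually prints):

```python
import json

def gen_nvmf_target_json(subsystems, traddr="10.0.0.2", trsvcid="4420"):
    """One bdev_nvme_attach_controller entry per subsystem id,
    matching the fields printed in the trace above."""
    return [
        {
            "params": {
                "name": f"Nvme{sub}",
                "trtype": "tcp",
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        }
        for sub in subsystems
    ]

# Single-subsystem case, as in the fio_dif_1_default run above.
print(json.dumps(gen_nvmf_target_json([0]), indent=2))
```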
00:35:31.035 Starting 1 thread 00:35:31.035 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.263 00:35:43.263 filename0: (groupid=0, jobs=1): err= 0: pid=2514388: Tue Jul 23 18:23:49 2024 00:35:43.263 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:35:43.263 slat (nsec): min=4471, max=46389, avg=9402.05, stdev=2792.32 00:35:43.263 clat (usec): min=40785, max=47073, avg=40995.79, stdev=390.28 00:35:43.263 lat (usec): min=40808, max=47100, avg=41005.19, stdev=390.36 00:35:43.263 clat percentiles (usec): 00:35:43.263 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:43.263 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:43.263 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:43.263 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:35:43.263 | 99.99th=[46924] 00:35:43.263 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:35:43.263 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:43.263 lat (msec) : 50=100.00% 00:35:43.263 cpu : usr=89.32%, sys=10.31%, ctx=42, majf=0, minf=235 00:35:43.263 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.263 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.263 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:43.263 00:35:43.263 Run status group 0 (all jobs): 00:35:43.263 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.263 00:35:43.263 real 0m11.079s 00:35:43.263 user 0m10.094s 00:35:43.263 sys 0m1.284s 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:43.263 ************************************ 00:35:43.263 END TEST fio_dif_1_default 00:35:43.263 ************************************ 00:35:43.263 18:23:49 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:43.263 18:23:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:43.263 18:23:49 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:43.263 18:23:49 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:43.263 18:23:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:43.263 ************************************ 00:35:43.263 START TEST 
fio_dif_1_multi_subsystems 00:35:43.263 ************************************ 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:43.263 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:43.264 bdev_null0 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:43.264 18:23:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:43.264 [2024-07-23 18:23:49.353779] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:43.264 bdev_null1 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:43.264 { 00:35:43.264 
"params": { 00:35:43.264 "name": "Nvme$subsystem", 00:35:43.264 "trtype": "$TEST_TRANSPORT", 00:35:43.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.264 "adrfam": "ipv4", 00:35:43.264 "trsvcid": "$NVMF_PORT", 00:35:43.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.264 "hdgst": ${hdgst:-false}, 00:35:43.264 "ddgst": ${ddgst:-false} 00:35:43.264 }, 00:35:43.264 "method": "bdev_nvme_attach_controller" 00:35:43.264 } 00:35:43.264 EOF 00:35:43.264 )") 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:43.264 18:23:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:43.264 { 00:35:43.264 "params": { 00:35:43.264 "name": "Nvme$subsystem", 00:35:43.264 "trtype": "$TEST_TRANSPORT", 00:35:43.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.264 "adrfam": "ipv4", 00:35:43.264 "trsvcid": "$NVMF_PORT", 00:35:43.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.264 "hdgst": ${hdgst:-false}, 00:35:43.264 "ddgst": ${ddgst:-false} 00:35:43.264 }, 00:35:43.264 "method": "bdev_nvme_attach_controller" 00:35:43.264 } 00:35:43.264 EOF 00:35:43.264 )") 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.264 18:23:49 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:43.264 "params": { 00:35:43.264 "name": "Nvme0", 00:35:43.264 "trtype": "tcp", 00:35:43.264 "traddr": "10.0.0.2", 00:35:43.264 "adrfam": "ipv4", 00:35:43.264 "trsvcid": "4420", 00:35:43.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.264 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.264 "hdgst": false, 00:35:43.264 "ddgst": false 00:35:43.264 }, 00:35:43.264 "method": "bdev_nvme_attach_controller" 00:35:43.264 },{ 00:35:43.264 "params": { 00:35:43.264 "name": "Nvme1", 00:35:43.264 "trtype": "tcp", 00:35:43.264 "traddr": "10.0.0.2", 00:35:43.264 "adrfam": "ipv4", 00:35:43.264 "trsvcid": "4420", 00:35:43.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:43.264 "hdgst": false, 00:35:43.264 "ddgst": false 00:35:43.264 }, 00:35:43.264 "method": "bdev_nvme_attach_controller" 00:35:43.264 }' 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:43.264 
18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:43.264 18:23:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.264 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:43.264 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:43.264 fio-3.35 00:35:43.264 Starting 2 threads 00:35:43.264 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.224 00:35:53.224 filename0: (groupid=0, jobs=1): err= 0: pid=2515671: Tue Jul 23 18:24:00 2024 00:35:53.224 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:35:53.224 slat (nsec): min=7578, max=41892, avg=9688.80, stdev=2609.90 00:35:53.224 clat (usec): min=40875, max=43677, avg=40981.21, stdev=174.05 00:35:53.224 lat (usec): min=40883, max=43706, avg=40990.90, stdev=174.46 00:35:53.224 clat percentiles (usec): 00:35:53.224 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:53.224 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:53.224 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:53.224 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:35:53.224 | 99.99th=[43779] 00:35:53.224 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:35:53.224 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:53.224 lat (msec) : 50=100.00% 00:35:53.224 cpu : usr=94.68%, sys=5.05%, ctx=10, majf=0, minf=199 00:35:53.224 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.224 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.224 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.224 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:53.224 filename1: (groupid=0, jobs=1): err= 0: pid=2515672: Tue Jul 23 18:24:00 2024 00:35:53.224 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:35:53.224 slat (nsec): min=7274, max=29530, avg=9726.00, stdev=2296.35 00:35:53.224 clat (usec): min=40816, max=44645, avg=40985.12, stdev=235.07 00:35:53.224 lat (usec): min=40825, max=44674, avg=40994.85, stdev=235.45 00:35:53.224 clat percentiles (usec): 00:35:53.224 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:53.224 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:53.224 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:53.224 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:35:53.224 | 99.99th=[44827] 00:35:53.224 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:35:53.224 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:53.224 lat (msec) : 50=100.00% 00:35:53.224 cpu : usr=94.35%, sys=5.35%, ctx=22, majf=0, minf=55 00:35:53.224 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.224 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.224 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:53.224 00:35:53.224 Run status group 0 (all jobs): 00:35:53.224 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10007-10008msec 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
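The trace above repeatedly shows nvmf/common.sh building the fio JSON config by appending a here-doc fragment per subsystem (`config+=("$(cat <<-EOF ...)")`), then joining the fragments with `IFS=,` and printing them for `--spdk_json_conf`. A minimal sketch of that pattern, with illustrative addresses and NQNs (not taken from any particular environment):

```shell
# Sketch of the per-subsystem JSON assembly seen in nvmf/common.sh:
# one here-doc fragment per subsystem, joined with IFS=, into the
# config consumed by fio's --spdk_json_conf. Values are illustrative.
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the per-subsystem fragments into one comma-separated list,
# matching the IFS=, / printf '%s\n' step in the trace.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

The joined output is the same `{...},{...}` controller list that appears in the log right before fio starts its two threads.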
00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.224 00:35:53.224 real 0m11.247s 00:35:53.224 user 0m20.139s 00:35:53.224 sys 0m1.311s 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:53.224 18:24:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:53.224 ************************************ 00:35:53.224 END TEST fio_dif_1_multi_subsystems 00:35:53.224 ************************************ 00:35:53.224 18:24:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:53.224 18:24:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:53.224 18:24:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:53.224 18:24:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.224 18:24:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:53.224 ************************************ 00:35:53.224 START TEST fio_dif_rand_params 00:35:53.224 ************************************ 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 
00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.224 bdev_null0 00:35:53.224 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.225 [2024-07-23 18:24:00.647180] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.225 { 00:35:53.225 "params": { 00:35:53.225 "name": "Nvme$subsystem", 00:35:53.225 "trtype": "$TEST_TRANSPORT", 00:35:53.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.225 "adrfam": "ipv4", 00:35:53.225 "trsvcid": "$NVMF_PORT", 00:35:53.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.225 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:35:53.225 "hdgst": ${hdgst:-false}, 00:35:53.225 "ddgst": ${ddgst:-false} 00:35:53.225 }, 00:35:53.225 "method": "bdev_nvme_attach_controller" 00:35:53.225 } 00:35:53.225 EOF 00:35:53.225 )") 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 
-- # grep libasan 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:53.225 "params": { 00:35:53.225 "name": "Nvme0", 00:35:53.225 "trtype": "tcp", 00:35:53.225 "traddr": "10.0.0.2", 00:35:53.225 "adrfam": "ipv4", 00:35:53.225 "trsvcid": "4420", 00:35:53.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.225 "hdgst": false, 00:35:53.225 "ddgst": false 00:35:53.225 }, 00:35:53.225 "method": "bdev_nvme_attach_controller" 00:35:53.225 }' 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:53.225 18:24:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.483 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:53.483 ... 00:35:53.483 fio-3.35 00:35:53.483 Starting 3 threads 00:35:53.483 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.034 00:36:00.034 filename0: (groupid=0, jobs=1): err= 0: pid=2517149: Tue Jul 23 18:24:06 2024 00:36:00.034 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(132MiB/5005msec) 00:36:00.034 slat (nsec): min=5076, max=39322, avg=16358.34, stdev=5187.29 00:36:00.034 clat (usec): min=4312, max=55711, avg=14235.95, stdev=11549.54 00:36:00.034 lat (usec): min=4324, max=55724, avg=14252.31, stdev=11549.32 00:36:00.034 clat percentiles (usec): 00:36:00.034 | 1.00th=[ 5080], 5.00th=[ 6849], 10.00th=[ 7570], 20.00th=[ 8160], 00:36:00.034 | 30.00th=[ 8979], 40.00th=[10290], 50.00th=[11731], 60.00th=[12649], 00:36:00.034 | 70.00th=[13304], 80.00th=[13960], 90.00th=[15926], 95.00th=[51643], 00:36:00.034 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:36:00.034 | 99.99th=[55837] 00:36:00.034 bw ( KiB/s): min=18432, max=36352, per=35.41%, avg=26905.60, stdev=5781.86, samples=10 00:36:00.034 iops : min= 144, max= 284, avg=210.20, stdev=45.17, samples=10 00:36:00.034 lat (msec) : 10=38.75%, 20=52.90%, 50=2.66%, 100=5.70% 00:36:00.034 cpu : usr=93.07%, sys=6.47%, ctx=9, majf=0, minf=104 00:36:00.034 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.034 issued rwts: total=1053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.034 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:36:00.034 filename0: (groupid=0, jobs=1): err= 0: pid=2517150: Tue Jul 23 18:24:06 2024 00:36:00.034 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(138MiB/5004msec) 00:36:00.034 slat (nsec): min=4828, max=73033, avg=14334.72, stdev=5050.37 00:36:00.034 clat (usec): min=4589, max=57196, avg=13627.91, stdev=7910.33 00:36:00.034 lat (usec): min=4601, max=57212, avg=13642.25, stdev=7910.34 00:36:00.034 clat percentiles (usec): 00:36:00.034 | 1.00th=[ 5276], 5.00th=[ 5604], 10.00th=[ 5866], 20.00th=[ 8586], 00:36:00.034 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[12256], 60.00th=[14091], 00:36:00.034 | 70.00th=[15795], 80.00th=[17957], 90.00th=[19792], 95.00th=[20579], 00:36:00.035 | 99.00th=[52167], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:36:00.035 | 99.99th=[57410] 00:36:00.035 bw ( KiB/s): min=16160, max=34048, per=36.97%, avg=28086.40, stdev=4930.49, samples=10 00:36:00.035 iops : min= 126, max= 266, avg=219.40, stdev=38.59, samples=10 00:36:00.035 lat (msec) : 10=32.27%, 20=59.55%, 50=6.45%, 100=1.73% 00:36:00.035 cpu : usr=92.88%, sys=6.68%, ctx=12, majf=0, minf=122 00:36:00.035 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.035 issued rwts: total=1100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.035 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:00.035 filename0: (groupid=0, jobs=1): err= 0: pid=2517151: Tue Jul 23 18:24:06 2024 00:36:00.035 read: IOPS=166, BW=20.9MiB/s (21.9MB/s)(105MiB/5046msec) 00:36:00.035 slat (nsec): min=5123, max=46050, avg=14386.28, stdev=5192.95 00:36:00.035 clat (usec): min=5309, max=55463, avg=17905.10, stdev=14601.06 00:36:00.035 lat (usec): min=5320, max=55476, avg=17919.49, stdev=14600.64 00:36:00.035 clat percentiles (usec): 00:36:00.035 | 1.00th=[ 5800], 5.00th=[ 8094], 
10.00th=[ 8717], 20.00th=[ 9765], 00:36:00.035 | 30.00th=[11076], 40.00th=[12256], 50.00th=[12911], 60.00th=[13304], 00:36:00.035 | 70.00th=[13829], 80.00th=[15008], 90.00th=[51119], 95.00th=[53216], 00:36:00.035 | 99.00th=[54789], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:36:00.035 | 99.99th=[55313] 00:36:00.035 bw ( KiB/s): min=13056, max=29952, per=28.27%, avg=21478.40, stdev=4896.75, samples=10 00:36:00.035 iops : min= 102, max= 234, avg=167.80, stdev=38.26, samples=10 00:36:00.035 lat (msec) : 10=23.28%, 20=61.28%, 50=3.44%, 100=12.00% 00:36:00.035 cpu : usr=93.26%, sys=6.28%, ctx=20, majf=0, minf=90 00:36:00.035 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.035 issued rwts: total=842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.035 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:00.035 00:36:00.035 Run status group 0 (all jobs): 00:36:00.035 READ: bw=74.2MiB/s (77.8MB/s), 20.9MiB/s-27.5MiB/s (21.9MB/s-28.8MB/s), io=374MiB (393MB), run=5004-5046msec 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
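The per-job bandwidth figures in the summaries above follow directly from the issued-IO counts, the block size, and the runtime. A quick cross-check for `filename0` in the run above (1053 issued 128KiB reads over 5005 ms), assuming fio's KiB/MiB are binary units:

```shell
# Recompute filename0's reported bandwidth from its own summary line:
# 1053 issued reads * 131072 B (128KiB) / 5.005 s, expressed in MiB/s.
mib_s=$(awk 'BEGIN { printf "%.1f", 1053 * 131072 / 5.005 / 1048576 }')
echo "$mib_s MiB/s"   # matches the 26.3MiB/s fio reports
```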
00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 bdev_null0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 [2024-07-23 18:24:06.818097] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 bdev_null1 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 bdev_null2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 
2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:00.035 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:00.035 { 00:36:00.035 "params": { 00:36:00.035 "name": "Nvme$subsystem", 00:36:00.035 "trtype": "$TEST_TRANSPORT", 00:36:00.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:00.036 "adrfam": "ipv4", 00:36:00.036 "trsvcid": "$NVMF_PORT", 00:36:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:00.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:00.036 "hdgst": ${hdgst:-false}, 00:36:00.036 "ddgst": ${ddgst:-false} 00:36:00.036 }, 00:36:00.036 "method": "bdev_nvme_attach_controller" 00:36:00.036 } 00:36:00.036 EOF 00:36:00.036 )") 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:00.036 { 00:36:00.036 "params": { 00:36:00.036 "name": "Nvme$subsystem", 00:36:00.036 "trtype": "$TEST_TRANSPORT", 00:36:00.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:00.036 "adrfam": "ipv4", 00:36:00.036 "trsvcid": "$NVMF_PORT", 00:36:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:00.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:00.036 "hdgst": ${hdgst:-false}, 00:36:00.036 "ddgst": ${ddgst:-false} 00:36:00.036 }, 00:36:00.036 "method": 
"bdev_nvme_attach_controller" 00:36:00.036 } 00:36:00.036 EOF 00:36:00.036 )") 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:00.036 { 00:36:00.036 "params": { 00:36:00.036 "name": "Nvme$subsystem", 00:36:00.036 "trtype": "$TEST_TRANSPORT", 00:36:00.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:00.036 "adrfam": "ipv4", 00:36:00.036 "trsvcid": "$NVMF_PORT", 00:36:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:00.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:00.036 "hdgst": ${hdgst:-false}, 00:36:00.036 "ddgst": ${ddgst:-false} 00:36:00.036 }, 00:36:00.036 "method": "bdev_nvme_attach_controller" 00:36:00.036 } 00:36:00.036 EOF 00:36:00.036 )") 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:00.036 "params": { 00:36:00.036 "name": "Nvme0", 00:36:00.036 "trtype": "tcp", 00:36:00.036 "traddr": "10.0.0.2", 00:36:00.036 "adrfam": "ipv4", 00:36:00.036 "trsvcid": "4420", 00:36:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:00.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:00.036 "hdgst": false, 00:36:00.036 "ddgst": false 00:36:00.036 }, 00:36:00.036 "method": "bdev_nvme_attach_controller" 00:36:00.036 },{ 00:36:00.036 "params": { 00:36:00.036 "name": "Nvme1", 00:36:00.036 "trtype": "tcp", 00:36:00.036 "traddr": "10.0.0.2", 00:36:00.036 "adrfam": "ipv4", 00:36:00.036 "trsvcid": "4420", 00:36:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:00.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:00.036 "hdgst": false, 00:36:00.036 "ddgst": false 00:36:00.036 }, 00:36:00.036 "method": "bdev_nvme_attach_controller" 00:36:00.036 },{ 00:36:00.036 "params": { 00:36:00.036 "name": "Nvme2", 00:36:00.036 "trtype": "tcp", 00:36:00.036 "traddr": "10.0.0.2", 00:36:00.036 "adrfam": "ipv4", 00:36:00.036 "trsvcid": "4420", 00:36:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:00.036 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:00.036 "hdgst": false, 00:36:00.036 "ddgst": false 00:36:00.036 }, 00:36:00.036 "method": "bdev_nvme_attach_controller" 00:36:00.036 }' 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:00.036 18:24:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:00.036 18:24:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.036 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:00.036 ... 00:36:00.036 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:00.036 ... 00:36:00.036 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:00.036 ... 
00:36:00.036 fio-3.35 00:36:00.036 Starting 24 threads 00:36:00.036 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.232 00:36:12.232 filename0: (groupid=0, jobs=1): err= 0: pid=2518281: Tue Jul 23 18:24:18 2024 00:36:12.232 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10025msec) 00:36:12.232 slat (usec): min=8, max=128, avg=46.48, stdev=21.37 00:36:12.232 clat (usec): min=30057, max=55981, avg=33153.23, stdev=1355.09 00:36:12.232 lat (usec): min=30093, max=56004, avg=33199.71, stdev=1352.53 00:36:12.232 clat percentiles (usec): 00:36:12.232 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:12.232 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:12.232 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:12.232 | 99.00th=[39060], 99.50th=[41681], 99.90th=[51643], 99.95th=[51643], 00:36:12.232 | 99.99th=[55837] 00:36:12.232 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1907.20, stdev=57.24, samples=20 00:36:12.232 iops : min= 416, max= 480, avg=476.80, stdev=14.31, samples=20 00:36:12.232 lat (msec) : 50=99.67%, 100=0.33% 00:36:12.232 cpu : usr=95.03%, sys=3.09%, ctx=197, majf=0, minf=35 00:36:12.232 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.232 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.232 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.232 filename0: (groupid=0, jobs=1): err= 0: pid=2518282: Tue Jul 23 18:24:18 2024 00:36:12.232 read: IOPS=475, BW=1904KiB/s (1949kB/s)(18.6MiB/10019msec) 00:36:12.232 slat (nsec): min=8294, max=98018, avg=33261.21, stdev=19882.64 00:36:12.232 clat (usec): min=27583, max=85490, avg=33302.99, stdev=2817.52 00:36:12.232 lat (usec): min=27624, max=85530, avg=33336.25, stdev=2816.65 
00:36:12.232 clat percentiles (usec): 00:36:12.232 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:36:12.232 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:12.232 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.232 | 99.00th=[37487], 99.50th=[43779], 99.90th=[79168], 99.95th=[79168], 00:36:12.232 | 99.99th=[85459] 00:36:12.232 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1900.80, stdev=79.51, samples=20 00:36:12.232 iops : min= 416, max= 512, avg=475.20, stdev=19.88, samples=20 00:36:12.232 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.232 cpu : usr=97.56%, sys=1.58%, ctx=89, majf=0, minf=27 00:36:12.232 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.232 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.232 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.232 filename0: (groupid=0, jobs=1): err= 0: pid=2518283: Tue Jul 23 18:24:18 2024 00:36:12.232 read: IOPS=475, BW=1904KiB/s (1949kB/s)(18.6MiB/10018msec) 00:36:12.232 slat (usec): min=11, max=148, avg=44.99, stdev=20.68 00:36:12.232 clat (usec): min=31974, max=78681, avg=33169.14, stdev=2752.63 00:36:12.232 lat (usec): min=32000, max=78706, avg=33214.13, stdev=2751.86 00:36:12.232 clat percentiles (usec): 00:36:12.232 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:12.232 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:36:12.232 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:36:12.232 | 99.00th=[39060], 99.50th=[41681], 99.90th=[78119], 99.95th=[79168], 00:36:12.232 | 99.99th=[79168] 00:36:12.232 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1899.80, stdev=75.02, samples=20 00:36:12.232 iops : min= 416, max= 
512, avg=474.95, stdev=18.75, samples=20 00:36:12.232 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.232 cpu : usr=97.21%, sys=1.96%, ctx=90, majf=0, minf=30 00:36:12.232 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:12.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.232 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.232 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.232 filename0: (groupid=0, jobs=1): err= 0: pid=2518284: Tue Jul 23 18:24:18 2024 00:36:12.232 read: IOPS=483, BW=1935KiB/s (1981kB/s)(18.9MiB/10014msec) 00:36:12.232 slat (usec): min=5, max=118, avg=36.38, stdev=26.68 00:36:12.232 clat (usec): min=7844, max=53684, avg=32770.06, stdev=2730.90 00:36:12.232 lat (usec): min=7898, max=53740, avg=32806.43, stdev=2726.81 00:36:12.232 clat percentiles (usec): 00:36:12.232 | 1.00th=[18220], 5.00th=[31589], 10.00th=[32375], 20.00th=[32637], 00:36:12.232 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:12.232 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:12.232 | 99.00th=[40109], 99.50th=[43779], 99.90th=[45876], 99.95th=[50070], 00:36:12.232 | 99.99th=[53740] 00:36:12.232 bw ( KiB/s): min= 1792, max= 2276, per=4.22%, avg=1931.40, stdev=86.00, samples=20 00:36:12.232 iops : min= 448, max= 569, avg=482.85, stdev=21.50, samples=20 00:36:12.232 lat (msec) : 10=0.12%, 20=1.57%, 50=98.25%, 100=0.06% 00:36:12.232 cpu : usr=97.69%, sys=1.71%, ctx=86, majf=0, minf=31 00:36:12.232 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:12.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.232 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.232 issued rwts: total=4844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.232 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:36:12.232 filename0: (groupid=0, jobs=1): err= 0: pid=2518285: Tue Jul 23 18:24:18 2024 00:36:12.232 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.7MiB/10027msec) 00:36:12.232 slat (nsec): min=8606, max=89577, avg=30516.34, stdev=12779.46 00:36:12.232 clat (usec): min=25672, max=56552, avg=33269.82, stdev=1648.59 00:36:12.232 lat (usec): min=25704, max=56574, avg=33300.34, stdev=1648.65 00:36:12.232 clat percentiles (usec): 00:36:12.232 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:12.232 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:12.233 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.233 | 99.00th=[39584], 99.50th=[43254], 99.90th=[56361], 99.95th=[56361], 00:36:12.233 | 99.99th=[56361] 00:36:12.233 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1907.20, stdev=82.01, samples=20 00:36:12.233 iops : min= 416, max= 512, avg=476.80, stdev=20.50, samples=20 00:36:12.233 lat (msec) : 50=99.67%, 100=0.33% 00:36:12.233 cpu : usr=97.21%, sys=1.92%, ctx=59, majf=0, minf=31 00:36:12.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:12.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.233 filename0: (groupid=0, jobs=1): err= 0: pid=2518286: Tue Jul 23 18:24:18 2024 00:36:12.233 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10015msec) 00:36:12.233 slat (usec): min=5, max=110, avg=25.32, stdev= 9.23 00:36:12.233 clat (msec): min=12, max=105, avg=33.37, stdev= 3.21 00:36:12.233 lat (msec): min=12, max=105, avg=33.40, stdev= 3.20 00:36:12.233 clat percentiles (msec): 00:36:12.233 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 
00:36:12.233 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:36:12.233 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:36:12.233 | 99.00th=[ 41], 99.50th=[ 44], 99.90th=[ 81], 99.95th=[ 81], 00:36:12.233 | 99.99th=[ 106] 00:36:12.233 bw ( KiB/s): min= 1664, max= 2039, per=4.15%, avg=1900.35, stdev=74.25, samples=20 00:36:12.233 iops : min= 416, max= 509, avg=475.05, stdev=18.49, samples=20 00:36:12.233 lat (msec) : 20=0.13%, 50=99.45%, 100=0.38%, 250=0.04% 00:36:12.233 cpu : usr=96.48%, sys=2.30%, ctx=159, majf=0, minf=35 00:36:12.233 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:12.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.233 filename0: (groupid=0, jobs=1): err= 0: pid=2518287: Tue Jul 23 18:24:18 2024 00:36:12.233 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10025msec) 00:36:12.233 slat (usec): min=8, max=106, avg=19.58, stdev=14.30 00:36:12.233 clat (usec): min=25805, max=56069, avg=33362.28, stdev=1336.61 00:36:12.233 lat (usec): min=25815, max=56082, avg=33381.86, stdev=1337.93 00:36:12.233 clat percentiles (usec): 00:36:12.233 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:36:12.233 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:12.233 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.233 | 99.00th=[39584], 99.50th=[41681], 99.90th=[51643], 99.95th=[51643], 00:36:12.233 | 99.99th=[55837] 00:36:12.233 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1907.20, stdev=57.24, samples=20 00:36:12.233 iops : min= 416, max= 480, avg=476.80, stdev=14.31, samples=20 00:36:12.233 lat (msec) : 50=99.67%, 100=0.33% 00:36:12.233 cpu : usr=97.91%, 
sys=1.69%, ctx=16, majf=0, minf=49 00:36:12.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.233 filename0: (groupid=0, jobs=1): err= 0: pid=2518288: Tue Jul 23 18:24:18 2024 00:36:12.233 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:36:12.233 slat (usec): min=9, max=111, avg=42.77, stdev=16.50 00:36:12.233 clat (usec): min=32026, max=70714, avg=33176.62, stdev=2308.57 00:36:12.233 lat (usec): min=32052, max=70753, avg=33219.39, stdev=2308.37 00:36:12.233 clat percentiles (usec): 00:36:12.233 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:12.233 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:36:12.233 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:12.233 | 99.00th=[39060], 99.50th=[41681], 99.90th=[70779], 99.95th=[70779], 00:36:12.233 | 99.99th=[70779] 00:36:12.233 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1899.79, stdev=64.19, samples=19 00:36:12.233 iops : min= 416, max= 480, avg=474.95, stdev=16.05, samples=19 00:36:12.233 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.233 cpu : usr=96.97%, sys=2.01%, ctx=73, majf=0, minf=28 00:36:12.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:12.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.233 filename1: (groupid=0, jobs=1): err= 0: pid=2518289: Tue Jul 23 18:24:18 2024 
00:36:12.233 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10008msec) 00:36:12.233 slat (usec): min=8, max=148, avg=42.10, stdev=30.28 00:36:12.233 clat (usec): min=15640, max=99324, avg=32904.14, stdev=5550.79 00:36:12.233 lat (usec): min=15711, max=99361, avg=32946.25, stdev=5554.22 00:36:12.233 clat percentiles (usec): 00:36:12.233 | 1.00th=[20841], 5.00th=[22676], 10.00th=[27132], 20.00th=[30016], 00:36:12.233 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:12.233 | 70.00th=[33424], 80.00th=[34341], 90.00th=[36963], 95.00th=[42206], 00:36:12.233 | 99.00th=[49546], 99.50th=[50594], 99.90th=[72877], 99.95th=[72877], 00:36:12.233 | 99.99th=[99091] 00:36:12.233 bw ( KiB/s): min= 1648, max= 2160, per=4.22%, avg=1929.26, stdev=116.28, samples=19 00:36:12.233 iops : min= 412, max= 540, avg=482.32, stdev=29.07, samples=19 00:36:12.233 lat (msec) : 20=0.70%, 50=98.57%, 100=0.72% 00:36:12.233 cpu : usr=97.85%, sys=1.51%, ctx=50, majf=0, minf=32 00:36:12.233 IO depths : 1=0.1%, 2=1.7%, 4=8.5%, 8=74.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:36:12.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 complete : 0=0.0%, 4=90.2%, 8=6.7%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.233 filename1: (groupid=0, jobs=1): err= 0: pid=2518290: Tue Jul 23 18:24:18 2024 00:36:12.233 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10025msec) 00:36:12.233 slat (usec): min=11, max=145, avg=42.55, stdev=17.95 00:36:12.233 clat (usec): min=31949, max=51647, avg=33144.93, stdev=1332.40 00:36:12.233 lat (usec): min=31984, max=51680, avg=33187.49, stdev=1331.00 00:36:12.233 clat percentiles (usec): 00:36:12.233 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:12.233 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:12.233 | 70.00th=[33162], 
80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.233 | 99.00th=[39584], 99.50th=[41681], 99.90th=[51643], 99.95th=[51643], 00:36:12.233 | 99.99th=[51643] 00:36:12.233 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1907.20, stdev=57.24, samples=20 00:36:12.233 iops : min= 416, max= 480, avg=476.80, stdev=14.31, samples=20 00:36:12.233 lat (msec) : 50=99.67%, 100=0.33% 00:36:12.233 cpu : usr=96.86%, sys=2.07%, ctx=152, majf=0, minf=25 00:36:12.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:12.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.233 filename1: (groupid=0, jobs=1): err= 0: pid=2518291: Tue Jul 23 18:24:18 2024 00:36:12.233 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10015msec) 00:36:12.233 slat (usec): min=12, max=153, avg=43.96, stdev=18.56 00:36:12.233 clat (usec): min=32041, max=76124, avg=33172.49, stdev=2608.06 00:36:12.233 lat (usec): min=32058, max=76163, avg=33216.45, stdev=2607.61 00:36:12.233 clat percentiles (usec): 00:36:12.233 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:12.233 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:36:12.233 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:36:12.233 | 99.00th=[39584], 99.50th=[41681], 99.90th=[76022], 99.95th=[76022], 00:36:12.233 | 99.99th=[76022] 00:36:12.233 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1900.40, stdev=62.53, samples=20 00:36:12.233 iops : min= 416, max= 480, avg=475.10, stdev=15.63, samples=20 00:36:12.233 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.233 cpu : usr=98.25%, sys=1.22%, ctx=46, majf=0, minf=31 00:36:12.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, 
>=64=0.0% 00:36:12.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.233 filename1: (groupid=0, jobs=1): err= 0: pid=2518292: Tue Jul 23 18:24:18 2024 00:36:12.233 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10025msec) 00:36:12.233 slat (usec): min=9, max=113, avg=41.36, stdev=18.85 00:36:12.233 clat (usec): min=31944, max=51795, avg=33207.29, stdev=1324.37 00:36:12.233 lat (usec): min=31987, max=51817, avg=33248.65, stdev=1322.15 00:36:12.233 clat percentiles (usec): 00:36:12.233 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:36:12.233 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:12.233 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.233 | 99.00th=[39060], 99.50th=[41681], 99.90th=[51643], 99.95th=[51643], 00:36:12.233 | 99.99th=[51643] 00:36:12.233 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1907.20, stdev=57.24, samples=20 00:36:12.233 iops : min= 416, max= 480, avg=476.80, stdev=14.31, samples=20 00:36:12.233 lat (msec) : 50=99.67%, 100=0.33% 00:36:12.233 cpu : usr=95.92%, sys=2.43%, ctx=213, majf=0, minf=28 00:36:12.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:12.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.233 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.234 filename1: (groupid=0, jobs=1): err= 0: pid=2518293: Tue Jul 23 18:24:18 2024 00:36:12.234 read: IOPS=480, BW=1920KiB/s (1966kB/s)(18.8MiB/10033msec) 00:36:12.234 slat (usec): min=8, max=108, 
avg=27.12, stdev=22.50 00:36:12.234 clat (usec): min=8316, max=43907, avg=33087.52, stdev=2072.29 00:36:12.234 lat (usec): min=8363, max=43923, avg=33114.64, stdev=2068.65 00:36:12.234 clat percentiles (usec): 00:36:12.234 | 1.00th=[30540], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:12.234 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:12.234 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.234 | 99.00th=[37487], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:36:12.234 | 99.99th=[43779] 00:36:12.234 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1920.15, stdev=58.73, samples=20 00:36:12.234 iops : min= 448, max= 512, avg=480.00, stdev=14.68, samples=20 00:36:12.234 lat (msec) : 10=0.33%, 20=0.33%, 50=99.34% 00:36:12.234 cpu : usr=97.85%, sys=1.73%, ctx=14, majf=0, minf=67 00:36:12.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.234 filename1: (groupid=0, jobs=1): err= 0: pid=2518294: Tue Jul 23 18:24:18 2024 00:36:12.234 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.6MiB/10008msec) 00:36:12.234 slat (usec): min=9, max=177, avg=54.54, stdev=24.49 00:36:12.234 clat (usec): min=31345, max=68389, avg=33060.77, stdev=2201.61 00:36:12.234 lat (usec): min=31442, max=68433, avg=33115.30, stdev=2199.68 00:36:12.234 clat percentiles (usec): 00:36:12.234 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:12.234 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:36:12.234 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:36:12.234 | 99.00th=[39060], 99.50th=[41681], 99.90th=[68682], 
99.95th=[68682], 00:36:12.234 | 99.99th=[68682] 00:36:12.234 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1899.79, stdev=64.19, samples=19 00:36:12.234 iops : min= 416, max= 480, avg=474.95, stdev=16.05, samples=19 00:36:12.234 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.234 cpu : usr=96.14%, sys=2.37%, ctx=239, majf=0, minf=34 00:36:12.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:12.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.234 filename1: (groupid=0, jobs=1): err= 0: pid=2518295: Tue Jul 23 18:24:18 2024 00:36:12.234 read: IOPS=475, BW=1902KiB/s (1948kB/s)(18.6MiB/10025msec) 00:36:12.234 slat (usec): min=9, max=140, avg=45.24, stdev=15.53 00:36:12.234 clat (usec): min=26186, max=86219, avg=33237.02, stdev=3178.86 00:36:12.234 lat (usec): min=26197, max=86241, avg=33282.26, stdev=3177.53 00:36:12.234 clat percentiles (usec): 00:36:12.234 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:12.234 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:12.234 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:12.234 | 99.00th=[39584], 99.50th=[41681], 99.90th=[86508], 99.95th=[86508], 00:36:12.234 | 99.99th=[86508] 00:36:12.234 bw ( KiB/s): min= 1536, max= 1920, per=4.15%, avg=1900.80, stdev=85.87, samples=20 00:36:12.234 iops : min= 384, max= 480, avg=475.20, stdev=21.47, samples=20 00:36:12.234 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.234 cpu : usr=97.94%, sys=1.64%, ctx=21, majf=0, minf=22 00:36:12.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.234 filename1: (groupid=0, jobs=1): err= 0: pid=2518296: Tue Jul 23 18:24:18 2024 00:36:12.234 read: IOPS=476, BW=1908KiB/s (1953kB/s)(18.7MiB/10031msec) 00:36:12.234 slat (usec): min=4, max=118, avg=35.66, stdev=24.17 00:36:12.234 clat (usec): min=29611, max=63952, avg=33243.09, stdev=1896.14 00:36:12.234 lat (usec): min=29653, max=63966, avg=33278.75, stdev=1893.07 00:36:12.234 clat percentiles (usec): 00:36:12.234 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:36:12.234 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:12.234 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.234 | 99.00th=[35390], 99.50th=[40109], 99.90th=[63701], 99.95th=[63701], 00:36:12.234 | 99.99th=[63701] 00:36:12.234 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1907.20, stdev=70.72, samples=20 00:36:12.234 iops : min= 416, max= 512, avg=476.80, stdev=17.68, samples=20 00:36:12.234 lat (msec) : 50=99.67%, 100=0.33% 00:36:12.234 cpu : usr=98.15%, sys=1.45%, ctx=21, majf=0, minf=31 00:36:12.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.234 filename2: (groupid=0, jobs=1): err= 0: pid=2518297: Tue Jul 23 18:24:18 2024 00:36:12.234 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:36:12.234 slat (usec): min=12, max=106, avg=42.31, stdev=13.83 00:36:12.234 clat (usec): min=26123, max=68152, avg=33185.49, stdev=2171.19 00:36:12.234 lat 
(usec): min=26139, max=68198, avg=33227.80, stdev=2171.34 00:36:12.234 clat percentiles (usec): 00:36:12.234 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:36:12.234 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:12.234 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:12.234 | 99.00th=[39584], 99.50th=[41681], 99.90th=[67634], 99.95th=[67634], 00:36:12.234 | 99.99th=[67634] 00:36:12.234 bw ( KiB/s): min= 1660, max= 1920, per=4.15%, avg=1899.58, stdev=65.00, samples=19 00:36:12.234 iops : min= 415, max= 480, avg=474.89, stdev=16.25, samples=19 00:36:12.234 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.234 cpu : usr=94.55%, sys=3.22%, ctx=243, majf=0, minf=30 00:36:12.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.234 filename2: (groupid=0, jobs=1): err= 0: pid=2518298: Tue Jul 23 18:24:18 2024 00:36:12.234 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10034msec) 00:36:12.234 slat (usec): min=4, max=110, avg=27.59, stdev=18.28 00:36:12.234 clat (usec): min=6670, max=43909, avg=33097.68, stdev=2140.70 00:36:12.234 lat (usec): min=6676, max=43931, avg=33125.26, stdev=2140.26 00:36:12.234 clat percentiles (usec): 00:36:12.234 | 1.00th=[29492], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:12.234 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:12.234 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.234 | 99.00th=[37487], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:36:12.234 | 99.99th=[43779] 00:36:12.234 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1920.00, 
stdev=58.73, samples=20 00:36:12.234 iops : min= 448, max= 512, avg=480.00, stdev=14.68, samples=20 00:36:12.234 lat (msec) : 10=0.33%, 20=0.33%, 50=99.34% 00:36:12.234 cpu : usr=98.25%, sys=1.36%, ctx=20, majf=0, minf=51 00:36:12.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.234 filename2: (groupid=0, jobs=1): err= 0: pid=2518299: Tue Jul 23 18:24:18 2024 00:36:12.234 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10022msec) 00:36:12.234 slat (usec): min=6, max=117, avg=49.44, stdev=17.25 00:36:12.234 clat (usec): min=29860, max=86189, avg=33180.39, stdev=2987.31 00:36:12.234 lat (usec): min=29890, max=86204, avg=33229.83, stdev=2985.17 00:36:12.234 clat percentiles (usec): 00:36:12.234 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:12.234 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:36:12.234 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:12.234 | 99.00th=[39060], 99.50th=[41681], 99.90th=[82314], 99.95th=[82314], 00:36:12.234 | 99.99th=[86508] 00:36:12.234 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1900.80, stdev=62.64, samples=20 00:36:12.234 iops : min= 416, max= 480, avg=475.20, stdev=15.66, samples=20 00:36:12.234 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.234 cpu : usr=98.26%, sys=1.33%, ctx=13, majf=0, minf=23 00:36:12.234 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.234 issued rwts: total=4768,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:36:12.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.234 filename2: (groupid=0, jobs=1): err= 0: pid=2518300: Tue Jul 23 18:24:18 2024 00:36:12.234 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10025msec) 00:36:12.234 slat (usec): min=8, max=113, avg=33.75, stdev=16.22 00:36:12.234 clat (usec): min=27672, max=51577, avg=33266.54, stdev=1316.55 00:36:12.234 lat (usec): min=27684, max=51610, avg=33300.29, stdev=1315.04 00:36:12.234 clat percentiles (usec): 00:36:12.234 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:36:12.234 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:12.235 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.235 | 99.00th=[39060], 99.50th=[41681], 99.90th=[51643], 99.95th=[51643], 00:36:12.235 | 99.99th=[51643] 00:36:12.235 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1907.20, stdev=57.24, samples=20 00:36:12.235 iops : min= 416, max= 480, avg=476.80, stdev=14.31, samples=20 00:36:12.235 lat (msec) : 50=99.67%, 100=0.33% 00:36:12.235 cpu : usr=96.51%, sys=2.23%, ctx=153, majf=0, minf=32 00:36:12.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.235 filename2: (groupid=0, jobs=1): err= 0: pid=2518301: Tue Jul 23 18:24:18 2024 00:36:12.235 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.6MiB/10008msec) 00:36:12.235 slat (usec): min=8, max=104, avg=41.49, stdev=21.38 00:36:12.235 clat (usec): min=25641, max=72468, avg=33201.93, stdev=2469.40 00:36:12.235 lat (usec): min=25650, max=72504, avg=33243.43, stdev=2469.18 00:36:12.235 clat percentiles (usec): 00:36:12.235 
| 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:12.235 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:12.235 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.235 | 99.00th=[39060], 99.50th=[43254], 99.90th=[71828], 99.95th=[72877], 00:36:12.235 | 99.99th=[72877] 00:36:12.235 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1899.79, stdev=64.19, samples=19 00:36:12.235 iops : min= 416, max= 480, avg=474.95, stdev=16.05, samples=19 00:36:12.235 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.235 cpu : usr=98.01%, sys=1.58%, ctx=14, majf=0, minf=22 00:36:12.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:12.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.235 filename2: (groupid=0, jobs=1): err= 0: pid=2518302: Tue Jul 23 18:24:18 2024 00:36:12.235 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10025msec) 00:36:12.235 slat (nsec): min=10362, max=86536, avg=39925.19, stdev=14440.32 00:36:12.235 clat (usec): min=29665, max=51442, avg=33203.65, stdev=1333.04 00:36:12.235 lat (usec): min=29676, max=51458, avg=33243.57, stdev=1329.78 00:36:12.235 clat percentiles (usec): 00:36:12.235 | 1.00th=[32375], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:36:12.235 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:12.235 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.235 | 99.00th=[39584], 99.50th=[41681], 99.90th=[51119], 99.95th=[51643], 00:36:12.235 | 99.99th=[51643] 00:36:12.235 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1907.20, stdev=57.24, samples=20 00:36:12.235 iops : min= 416, max= 480, avg=476.80, stdev=14.31, samples=20 
00:36:12.235 lat (msec) : 50=99.67%, 100=0.33% 00:36:12.235 cpu : usr=98.01%, sys=1.56%, ctx=13, majf=0, minf=28 00:36:12.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.235 filename2: (groupid=0, jobs=1): err= 0: pid=2518303: Tue Jul 23 18:24:18 2024 00:36:12.235 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10023msec) 00:36:12.235 slat (usec): min=12, max=164, avg=46.65, stdev=20.80 00:36:12.235 clat (usec): min=29723, max=83549, avg=33203.04, stdev=3022.12 00:36:12.235 lat (usec): min=29765, max=83572, avg=33249.69, stdev=3020.71 00:36:12.235 clat percentiles (usec): 00:36:12.235 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:12.235 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:36:12.235 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.235 | 99.00th=[38536], 99.50th=[41157], 99.90th=[83362], 99.95th=[83362], 00:36:12.235 | 99.99th=[83362] 00:36:12.235 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1900.80, stdev=62.64, samples=20 00:36:12.235 iops : min= 416, max= 480, avg=475.20, stdev=15.66, samples=20 00:36:12.235 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.235 cpu : usr=97.95%, sys=1.50%, ctx=37, majf=0, minf=31 00:36:12.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.235 
filename2: (groupid=0, jobs=1): err= 0: pid=2518304: Tue Jul 23 18:24:18 2024 00:36:12.235 read: IOPS=475, BW=1904KiB/s (1949kB/s)(18.6MiB/10019msec) 00:36:12.235 slat (usec): min=8, max=113, avg=26.41, stdev=16.44 00:36:12.235 clat (usec): min=27534, max=85499, avg=33372.42, stdev=2801.54 00:36:12.235 lat (usec): min=27581, max=85521, avg=33398.84, stdev=2800.76 00:36:12.235 clat percentiles (usec): 00:36:12.235 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:12.235 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:12.235 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:12.235 | 99.00th=[37487], 99.50th=[43779], 99.90th=[78119], 99.95th=[79168], 00:36:12.235 | 99.99th=[85459] 00:36:12.235 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1900.80, stdev=75.15, samples=20 00:36:12.235 iops : min= 416, max= 512, avg=475.20, stdev=18.79, samples=20 00:36:12.235 lat (msec) : 50=99.66%, 100=0.34% 00:36:12.235 cpu : usr=96.79%, sys=2.15%, ctx=136, majf=0, minf=31 00:36:12.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:12.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.235 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:12.235 00:36:12.235 Run status group 0 (all jobs): 00:36:12.235 READ: bw=44.7MiB/s (46.9MB/s), 1902KiB/s-1935KiB/s (1948kB/s-1981kB/s), io=448MiB (470MB), run=10007-10034msec 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.235 
18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:12.235 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- 
# for sub in "$@" 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.236 bdev_null0 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.236 [2024-07-23 
18:24:18.433515] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.236 bdev_null1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:12.236 { 00:36:12.236 "params": { 00:36:12.236 "name": "Nvme$subsystem", 00:36:12.236 "trtype": "$TEST_TRANSPORT", 00:36:12.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:12.236 "adrfam": "ipv4", 00:36:12.236 "trsvcid": "$NVMF_PORT", 00:36:12.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:12.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:12.236 "hdgst": ${hdgst:-false}, 00:36:12.236 "ddgst": ${ddgst:-false} 00:36:12.236 }, 00:36:12.236 "method": "bdev_nvme_attach_controller" 00:36:12.236 } 00:36:12.236 EOF 00:36:12.236 )") 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:12.236 { 00:36:12.236 "params": { 00:36:12.236 "name": "Nvme$subsystem", 00:36:12.236 "trtype": "$TEST_TRANSPORT", 00:36:12.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:12.236 "adrfam": "ipv4", 00:36:12.236 "trsvcid": "$NVMF_PORT", 00:36:12.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:12.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:12.236 "hdgst": ${hdgst:-false}, 00:36:12.236 "ddgst": ${ddgst:-false} 00:36:12.236 }, 00:36:12.236 "method": "bdev_nvme_attach_controller" 00:36:12.236 } 00:36:12.236 EOF 00:36:12.236 )") 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:12.236 "params": { 00:36:12.236 "name": "Nvme0", 00:36:12.236 "trtype": "tcp", 00:36:12.236 "traddr": "10.0.0.2", 00:36:12.236 "adrfam": "ipv4", 00:36:12.236 "trsvcid": "4420", 00:36:12.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:12.236 "hdgst": false, 00:36:12.236 "ddgst": false 00:36:12.236 }, 00:36:12.236 "method": "bdev_nvme_attach_controller" 00:36:12.236 },{ 00:36:12.236 "params": { 00:36:12.236 "name": "Nvme1", 00:36:12.236 "trtype": "tcp", 00:36:12.236 "traddr": "10.0.0.2", 00:36:12.236 "adrfam": "ipv4", 00:36:12.236 "trsvcid": "4420", 00:36:12.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:12.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:12.236 "hdgst": false, 00:36:12.236 "ddgst": false 00:36:12.236 }, 00:36:12.236 "method": "bdev_nvme_attach_controller" 00:36:12.236 }' 
00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:12.236 18:24:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.236 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:12.236 ... 00:36:12.237 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:12.237 ... 
00:36:12.237 fio-3.35 00:36:12.237 Starting 4 threads 00:36:12.237 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.496 00:36:17.496 filename0: (groupid=0, jobs=1): err= 0: pid=2519940: Tue Jul 23 18:24:24 2024 00:36:17.496 read: IOPS=1878, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5003msec) 00:36:17.496 slat (usec): min=3, max=152, avg=21.77, stdev=11.71 00:36:17.496 clat (usec): min=823, max=11879, avg=4180.75, stdev=579.36 00:36:17.496 lat (usec): min=845, max=11893, avg=4202.52, stdev=579.72 00:36:17.496 clat percentiles (usec): 00:36:17.496 | 1.00th=[ 2409], 5.00th=[ 3326], 10.00th=[ 3687], 20.00th=[ 3982], 00:36:17.496 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:36:17.496 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5014], 00:36:17.496 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7439], 99.95th=[11863], 00:36:17.496 | 99.99th=[11863] 00:36:17.496 bw ( KiB/s): min=14592, max=15792, per=25.15%, avg=15022.40, stdev=362.23, samples=10 00:36:17.496 iops : min= 1824, max= 1974, avg=1877.80, stdev=45.28, samples=10 00:36:17.496 lat (usec) : 1000=0.06% 00:36:17.496 lat (msec) : 2=0.51%, 4=20.25%, 10=79.09%, 20=0.09% 00:36:17.496 cpu : usr=87.72%, sys=7.28%, ctx=219, majf=0, minf=9 00:36:17.496 IO depths : 1=0.7%, 2=17.0%, 4=56.5%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.496 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.496 issued rwts: total=9396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.496 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.496 filename0: (groupid=0, jobs=1): err= 0: pid=2519941: Tue Jul 23 18:24:24 2024 00:36:17.496 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5002msec) 00:36:17.496 slat (nsec): min=4561, max=91494, avg=19436.78, stdev=10703.20 00:36:17.496 clat (usec): min=909, max=8204, avg=4250.75, stdev=549.64 00:36:17.496 lat (usec): min=927, 
max=8216, avg=4270.18, stdev=549.14 00:36:17.496 clat percentiles (usec): 00:36:17.496 | 1.00th=[ 2671], 5.00th=[ 3523], 10.00th=[ 3851], 20.00th=[ 4015], 00:36:17.496 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:17.496 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5080], 00:36:17.496 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7504], 99.95th=[ 7635], 00:36:17.496 | 99.99th=[ 8225] 00:36:17.496 bw ( KiB/s): min=14304, max=15136, per=24.79%, avg=14810.90, stdev=256.92, samples=10 00:36:17.496 iops : min= 1788, max= 1892, avg=1851.30, stdev=32.17, samples=10 00:36:17.496 lat (usec) : 1000=0.06% 00:36:17.496 lat (msec) : 2=0.28%, 4=17.32%, 10=82.34% 00:36:17.496 cpu : usr=94.40%, sys=5.08%, ctx=6, majf=0, minf=9 00:36:17.496 IO depths : 1=0.2%, 2=14.3%, 4=58.7%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.496 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.496 issued rwts: total=9263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.496 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.496 filename1: (groupid=0, jobs=1): err= 0: pid=2519942: Tue Jul 23 18:24:24 2024 00:36:17.496 read: IOPS=1846, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5001msec) 00:36:17.496 slat (nsec): min=4057, max=92339, avg=19607.83, stdev=10671.03 00:36:17.496 clat (usec): min=623, max=7876, avg=4261.73, stdev=569.79 00:36:17.496 lat (usec): min=631, max=7898, avg=4281.34, stdev=569.27 00:36:17.496 clat percentiles (usec): 00:36:17.496 | 1.00th=[ 2704], 5.00th=[ 3556], 10.00th=[ 3851], 20.00th=[ 4015], 00:36:17.496 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:17.496 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5211], 00:36:17.496 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7635], 00:36:17.496 | 99.99th=[ 7898] 00:36:17.496 bw ( KiB/s): min=14624, max=14976, 
per=24.74%, avg=14776.89, stdev=127.02, samples=9 00:36:17.496 iops : min= 1828, max= 1872, avg=1847.11, stdev=15.88, samples=9 00:36:17.496 lat (usec) : 750=0.02%, 1000=0.01% 00:36:17.496 lat (msec) : 2=0.48%, 4=16.72%, 10=82.77% 00:36:17.496 cpu : usr=94.80%, sys=4.66%, ctx=10, majf=0, minf=9 00:36:17.496 IO depths : 1=0.1%, 2=15.7%, 4=57.5%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.496 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.496 issued rwts: total=9233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.496 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.496 filename1: (groupid=0, jobs=1): err= 0: pid=2519943: Tue Jul 23 18:24:24 2024 00:36:17.496 read: IOPS=1892, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5002msec) 00:36:17.496 slat (nsec): min=5150, max=88747, avg=17194.96, stdev=10155.44 00:36:17.496 clat (usec): min=928, max=8620, avg=4169.84, stdev=533.29 00:36:17.496 lat (usec): min=941, max=8648, avg=4187.03, stdev=533.83 00:36:17.496 clat percentiles (usec): 00:36:17.496 | 1.00th=[ 2442], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3949], 00:36:17.496 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:36:17.496 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4817], 00:36:17.496 | 99.00th=[ 6063], 99.50th=[ 6521], 99.90th=[ 7767], 99.95th=[ 8225], 00:36:17.496 | 99.99th=[ 8586] 00:36:17.496 bw ( KiB/s): min=14480, max=15648, per=25.33%, avg=15131.20, stdev=371.43, samples=10 00:36:17.496 iops : min= 1810, max= 1956, avg=1891.40, stdev=46.43, samples=10 00:36:17.496 lat (usec) : 1000=0.04% 00:36:17.496 lat (msec) : 2=0.42%, 4=22.73%, 10=76.81% 00:36:17.496 cpu : usr=94.38%, sys=5.12%, ctx=9, majf=0, minf=9 00:36:17.496 IO depths : 1=0.4%, 2=12.4%, 4=59.5%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.496 
complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.496 issued rwts: total=9465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.496 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.496 00:36:17.496 Run status group 0 (all jobs): 00:36:17.496 READ: bw=58.3MiB/s (61.2MB/s), 14.4MiB/s-14.8MiB/s (15.1MB/s-15.5MB/s), io=292MiB (306MB), run=5001-5003msec 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:17.496 
18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.496 00:36:17.496 real 0m24.155s 00:36:17.496 user 4m30.922s 00:36:17.496 sys 0m7.571s 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:17.496 18:24:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.496 ************************************ 00:36:17.496 END TEST fio_dif_rand_params 00:36:17.496 ************************************ 00:36:17.496 18:24:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:17.496 18:24:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:17.496 18:24:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:17.496 18:24:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:17.496 18:24:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.496 ************************************ 00:36:17.496 START TEST fio_dif_digest 00:36:17.497 ************************************ 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 
-- # local NULL_DIF 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.497 bdev_null0 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.497 18:24:24 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.497 [2024-07-23 18:24:24.857252] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:17.497 
18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:17.497 { 00:36:17.497 "params": { 00:36:17.497 "name": "Nvme$subsystem", 00:36:17.497 "trtype": "$TEST_TRANSPORT", 00:36:17.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.497 "adrfam": "ipv4", 00:36:17.497 "trsvcid": "$NVMF_PORT", 00:36:17.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.497 "hdgst": ${hdgst:-false}, 00:36:17.497 "ddgst": ${ddgst:-false} 00:36:17.497 }, 00:36:17.497 "method": "bdev_nvme_attach_controller" 00:36:17.497 } 00:36:17.497 EOF 00:36:17.497 )") 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:17.497 "params": { 00:36:17.497 "name": "Nvme0", 00:36:17.497 "trtype": "tcp", 00:36:17.497 "traddr": "10.0.0.2", 00:36:17.497 "adrfam": "ipv4", 00:36:17.497 "trsvcid": "4420", 00:36:17.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.497 "hdgst": true, 00:36:17.497 "ddgst": true 00:36:17.497 }, 00:36:17.497 "method": "bdev_nvme_attach_controller" 00:36:17.497 }' 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 
00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:17.497 18:24:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.497 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:17.497 ... 00:36:17.497 fio-3.35 00:36:17.497 Starting 3 threads 00:36:17.497 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.741 00:36:29.741 filename0: (groupid=0, jobs=1): err= 0: pid=2520695: Tue Jul 23 18:24:35 2024 00:36:29.741 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(218MiB/10051msec) 00:36:29.741 slat (nsec): min=4470, max=50093, avg=15386.67, stdev=2077.34 00:36:29.741 clat (usec): min=8595, max=56388, avg=17273.55, stdev=2282.01 00:36:29.741 lat (usec): min=8609, max=56404, avg=17288.94, stdev=2282.12 00:36:29.741 clat percentiles (usec): 00:36:29.741 | 1.00th=[ 9503], 5.00th=[15008], 10.00th=[15795], 20.00th=[16319], 00:36:29.741 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:36:29.741 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19006], 95.00th=[19530], 00:36:29.741 | 99.00th=[21627], 99.50th=[23200], 99.90th=[51643], 99.95th=[56361], 00:36:29.741 | 99.99th=[56361] 00:36:29.742 bw ( KiB/s): min=20777, max=24064, per=30.55%, avg=22248.45, stdev=909.72, samples=20 00:36:29.742 iops : min= 162, max= 188, avg=173.80, stdev= 7.13, samples=20 00:36:29.742 lat (msec) : 10=2.13%, 20=94.66%, 50=3.10%, 100=0.11% 00:36:29.742 cpu : usr=92.13%, sys=7.10%, ctx=240, majf=0, minf=141 00:36:29.742 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.742 issued rwts: 
total=1741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.742 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.742 filename0: (groupid=0, jobs=1): err= 0: pid=2520696: Tue Jul 23 18:24:35 2024 00:36:29.742 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(280MiB/10009msec) 00:36:29.742 slat (nsec): min=4497, max=43553, avg=14387.57, stdev=3020.06 00:36:29.742 clat (usec): min=9211, max=54428, avg=13391.24, stdev=3407.33 00:36:29.742 lat (usec): min=9226, max=54443, avg=13405.63, stdev=3407.40 00:36:29.742 clat percentiles (usec): 00:36:29.742 | 1.00th=[10552], 5.00th=[11338], 10.00th=[11731], 20.00th=[12256], 00:36:29.742 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:36:29.742 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:36:29.742 | 99.00th=[16057], 99.50th=[52167], 99.90th=[53740], 99.95th=[54264], 00:36:29.742 | 99.99th=[54264] 00:36:29.742 bw ( KiB/s): min=25600, max=30976, per=39.30%, avg=28620.80, stdev=1423.17, samples=20 00:36:29.742 iops : min= 200, max= 242, avg=223.60, stdev=11.12, samples=20 00:36:29.742 lat (msec) : 10=0.27%, 20=99.06%, 100=0.67% 00:36:29.742 cpu : usr=93.12%, sys=6.38%, ctx=23, majf=0, minf=209 00:36:29.742 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.742 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.742 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.742 filename0: (groupid=0, jobs=1): err= 0: pid=2520697: Tue Jul 23 18:24:35 2024 00:36:29.742 read: IOPS=173, BW=21.6MiB/s (22.7MB/s)(217MiB/10049msec) 00:36:29.742 slat (nsec): min=5194, max=29393, avg=14381.42, stdev=1275.36 00:36:29.742 clat (usec): min=8454, max=59022, avg=17293.48, stdev=2320.64 00:36:29.742 lat (usec): min=8468, max=59036, avg=17307.86, stdev=2320.71 
00:36:29.742 clat percentiles (usec): 00:36:29.742 | 1.00th=[ 9503], 5.00th=[14877], 10.00th=[15664], 20.00th=[16188], 00:36:29.742 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:36:29.742 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19268], 95.00th=[20055], 00:36:29.742 | 99.00th=[21103], 99.50th=[21627], 99.90th=[50594], 99.95th=[58983], 00:36:29.742 | 99.99th=[58983] 00:36:29.742 bw ( KiB/s): min=20992, max=24320, per=30.51%, avg=22222.90, stdev=927.96, samples=20 00:36:29.742 iops : min= 164, max= 190, avg=173.60, stdev= 7.27, samples=20 00:36:29.742 lat (msec) : 10=1.96%, 20=93.67%, 50=4.26%, 100=0.12% 00:36:29.742 cpu : usr=93.45%, sys=6.09%, ctx=18, majf=0, minf=146 00:36:29.742 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.742 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.742 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:29.742 00:36:29.742 Run status group 0 (all jobs): 00:36:29.742 READ: bw=71.1MiB/s (74.6MB/s), 21.6MiB/s-28.0MiB/s (22.7MB/s-29.3MB/s), io=715MiB (750MB), run=10009-10051msec 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.742 18:24:35 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.742 00:36:29.742 real 0m11.114s 00:36:29.742 user 0m29.162s 00:36:29.742 sys 0m2.242s 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:29.742 18:24:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:29.742 ************************************ 00:36:29.742 END TEST fio_dif_digest 00:36:29.742 ************************************ 00:36:29.742 18:24:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:29.742 18:24:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:29.742 18:24:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:29.742 18:24:35 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:29.742 18:24:35 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:29.742 18:24:35 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:29.742 18:24:35 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:29.742 18:24:35 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:29.742 18:24:35 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:29.742 rmmod nvme_tcp 00:36:29.742 rmmod nvme_fabrics 00:36:29.742 rmmod nvme_keyring 00:36:29.742 18:24:36 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:29.742 18:24:36 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:29.742 18:24:36 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:29.742 18:24:36 nvmf_dif -- 
nvmf/common.sh@489 -- # '[' -n 2514161 ']' 00:36:29.742 18:24:36 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2514161 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2514161 ']' 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2514161 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2514161 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2514161' 00:36:29.742 killing process with pid 2514161 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2514161 00:36:29.742 18:24:36 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2514161 00:36:29.742 18:24:36 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:29.742 18:24:36 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.742 Waiting for block devices as requested 00:36:30.000 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:30.000 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:30.000 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.259 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.259 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.259 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:30.517 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.517 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.517 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:30.517 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:30.775 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.775 0000:80:04.5 
(8086 0e25): vfio-pci -> ioatdma 00:36:30.775 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.775 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:31.033 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:31.033 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:31.033 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:31.291 18:24:38 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:31.291 18:24:38 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:31.291 18:24:38 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:31.291 18:24:38 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:31.291 18:24:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.291 18:24:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:31.291 18:24:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.195 18:24:40 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:33.195 00:36:33.195 real 1m6.415s 00:36:33.195 user 6m24.377s 00:36:33.195 sys 0m20.375s 00:36:33.195 18:24:40 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.195 18:24:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:33.195 ************************************ 00:36:33.195 END TEST nvmf_dif 00:36:33.195 ************************************ 00:36:33.195 18:24:40 -- common/autotest_common.sh@1142 -- # return 0 00:36:33.195 18:24:40 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:33.195 18:24:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:33.195 18:24:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.195 18:24:40 -- common/autotest_common.sh@10 -- # set +x 00:36:33.195 ************************************ 00:36:33.195 START TEST nvmf_abort_qd_sizes 00:36:33.195 
************************************ 00:36:33.195 18:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:33.196 * Looking for test storage... 00:36:33.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:33.196 18:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.196 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:33.453 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.453 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.453 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.453 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.454 
18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:33.454 18:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:35.353 18:24:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:35.353 
18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:35.353 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:35.353 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:35.353 
18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:35.353 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:35.353 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:35.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:35.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:35.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:36:35.354 00:36:35.354 --- 10.0.0.2 ping statistics --- 00:36:35.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:35.354 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:35.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:35.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:36:35.354 00:36:35.354 --- 10.0.0.1 ping statistics --- 00:36:35.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:35.354 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:35.354 18:24:42 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:36.726 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:36.726 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:36.726 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:36.726 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:36.726 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:36.726 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:36.726 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:36.726 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:36.726 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:36.726 0000:80:04.6 (8086 0e26): ioatdma -> 
vfio-pci 00:36:36.726 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:36.726 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:36.726 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:36.726 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:36.726 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:36.726 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:37.661 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:37.919 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:37.919 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:37.919 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2525498 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2525498 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2525498 ']' 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:37.920 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.920 [2024-07-23 18:24:45.494860] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:36:37.920 [2024-07-23 18:24:45.494946] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.920 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.920 [2024-07-23 18:24:45.559441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:38.178 [2024-07-23 18:24:45.650863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.178 [2024-07-23 18:24:45.650921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:38.178 [2024-07-23 18:24:45.650951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.178 [2024-07-23 18:24:45.650963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.178 [2024-07-23 18:24:45.650972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:38.178 [2024-07-23 18:24:45.651061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.178 [2024-07-23 18:24:45.651129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.178 [2024-07-23 18:24:45.651160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:38.178 [2024-07-23 18:24:45.651161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:38.178 18:24:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:38.436 ************************************ 00:36:38.436 START TEST spdk_target_abort 00:36:38.436 ************************************ 00:36:38.436 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:38.436 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:38.436 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:38.436 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.436 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.714 spdk_targetn1 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.714 [2024-07-23 18:24:48.676206] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.714 [2024-07-23 18:24:48.708516] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:41.714 18:24:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.714 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.236 Initializing NVMe Controllers 00:36:44.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:44.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:44.236 Initialization complete. Launching workers. 
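The TCP path the abort run above just connected over was built earlier in this trace by nvmf_tcp_init (nvmf/common.sh@229-268): the target-side E810 netdev is moved into a network namespace so initiator and target traverse a real TCP socket on one host. Stripped of the xtrace noise, those steps are roughly the following (root and the actual cvl_0_0/cvl_0_1 interfaces required; names and addresses are taken verbatim from the trace):

```shell
# Recap of the nvmf_tcp_init steps traced above: target NIC into a netns,
# addresses on both ends, firewall opened for port 4420, then ping sanity checks.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```

The nvmf_tgt process itself is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why it listens on 10.0.0.2 while the abort example connects from the default namespace.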
00:36:44.236 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12173, failed: 0 00:36:44.236 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1198, failed to submit 10975 00:36:44.236 success 713, unsuccess 485, failed 0 00:36:44.236 18:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:44.236 18:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:44.236 EAL: No free 2048 kB hugepages reported on node 1 00:36:47.513 Initializing NVMe Controllers 00:36:47.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:47.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:47.513 Initialization complete. Launching workers. 
00:36:47.513 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8667, failed: 0 00:36:47.513 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 7437 00:36:47.513 success 348, unsuccess 882, failed 0 00:36:47.513 18:24:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:47.513 18:24:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.513 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.791 Initializing NVMe Controllers 00:36:50.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:50.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:50.791 Initialization complete. Launching workers. 
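The three runs in this section sweep queue depths 4, 24 and 64 against the same subsystem; the driver loop in abort_qd_sizes.sh's rabort() amounts to roughly the sketch below, shown as a dry run that only prints the command lines (ABORT_BIN stands in for the built build/examples/abort binary):

```shell
# Dry-run sketch of the rabort() queue-depth sweep seen in the trace.
# Only the commands are printed; the real script executes them.
ABORT_BIN=abort
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
  # -q: queue depth, -w rw -M 50: mixed 50/50 read/write, -o 4096: 4 KiB I/O
  echo "$ABORT_BIN -q $qd -w rw -M 50 -o 4096 -r '$target'"
done
```

Each run reports I/O completed, aborts submitted, and the success/unsuccess split for the submitted aborts, as in the NS:/CTRLR: lines above.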
00:36:50.791 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31852, failed: 0 00:36:50.791 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2789, failed to submit 29063 00:36:50.791 success 514, unsuccess 2275, failed 0 00:36:50.791 18:24:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:50.791 18:24:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.791 18:24:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.791 18:24:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.791 18:24:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:50.791 18:24:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.791 18:24:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2525498 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2525498 ']' 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2525498 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2525498 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525498' 00:36:52.197 killing process with pid 2525498 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2525498 00:36:52.197 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2525498 00:36:52.455 00:36:52.455 real 0m14.158s 00:36:52.455 user 0m53.801s 00:36:52.455 sys 0m2.477s 00:36:52.455 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:52.455 18:24:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.455 ************************************ 00:36:52.455 END TEST spdk_target_abort 00:36:52.455 ************************************ 00:36:52.455 18:25:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:52.455 18:25:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:52.455 18:25:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:52.455 18:25:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:52.455 18:25:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:52.455 ************************************ 00:36:52.455 START TEST kernel_target_abort 00:36:52.455 ************************************ 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:52.455 18:25:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:52.455 18:25:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:53.831 Waiting for block devices as requested 00:36:53.831 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:53.832 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:53.832 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:54.089 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:54.089 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:54.089 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:54.089 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:54.347 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:54.347 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:54.347 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:54.347 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:54.604 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:54.604 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:54.604 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:54.604 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:54.862 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:54.862 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:54.862 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:55.119 No valid GPT data, bailing 00:36:55.119 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:55.119 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:55.120 18:25:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:55.120 00:36:55.120 Discovery Log Number of Records 2, Generation counter 2 00:36:55.120 =====Discovery Log Entry 0====== 00:36:55.120 trtype: tcp 00:36:55.120 adrfam: ipv4 00:36:55.120 subtype: current discovery subsystem 00:36:55.120 treq: not specified, sq flow control disable supported 00:36:55.120 portid: 1 00:36:55.120 trsvcid: 4420 00:36:55.120 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:55.120 traddr: 10.0.0.1 00:36:55.120 eflags: none 00:36:55.120 sectype: none 00:36:55.120 =====Discovery Log Entry 1====== 00:36:55.120 trtype: tcp 00:36:55.120 adrfam: ipv4 00:36:55.120 subtype: nvme subsystem 00:36:55.120 treq: not specified, sq flow control disable supported 00:36:55.120 portid: 1 00:36:55.120 trsvcid: 4420 00:36:55.120 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:55.120 traddr: 10.0.0.1 00:36:55.120 eflags: none 00:36:55.120 
sectype: none 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:55.120 18:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.120 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.397 Initializing NVMe Controllers 00:36:58.397 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:58.397 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:58.397 Initialization complete. Launching workers. 
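The per-field loop traced above (abort_qd_sizes.sh@28-29) assembles the transport ID one "name:value" pair at a time before handing it to the abort example via `-r`. A minimal standalone sketch of that assembly, using the same field values as the log; the loop body is an illustration of the idea, not a verbatim copy of abort_qd_sizes.sh:

```shell
#!/usr/bin/env bash
# Rebuild of the rabort target string seen in the trace. Variable names and
# values come from the log; the concatenation logic is a sketch.
trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn

target=""
for r in trtype adrfam traddr trsvcid subnqn; do
    # ${!r} is bash indirect expansion: the value of the variable named by $r.
    # ${target:+$target } inserts a separating space only after the first field.
    target="${target:+$target }$r:${!r}"
done
echo "$target"
```

The resulting string is what the trace then passes as `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'` to build/examples/abort.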
00:36:58.397 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52705, failed: 0 00:36:58.397 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 52705, failed to submit 0 00:36:58.397 success 0, unsuccess 52705, failed 0 00:36:58.397 18:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:58.397 18:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:58.397 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.675 Initializing NVMe Controllers 00:37:01.675 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:01.675 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:01.675 Initialization complete. Launching workers. 
00:37:01.675 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95110, failed: 0 00:37:01.675 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23990, failed to submit 71120 00:37:01.675 success 0, unsuccess 23990, failed 0 00:37:01.675 18:25:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:01.675 18:25:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:01.675 EAL: No free 2048 kB hugepages reported on node 1 00:37:04.954 Initializing NVMe Controllers 00:37:04.954 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:04.954 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:04.954 Initialization complete. Launching workers. 
00:37:04.954 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90718, failed: 0 00:37:04.954 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22682, failed to submit 68036 00:37:04.954 success 0, unsuccess 22682, failed 0 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:04.954 18:25:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:04.954 18:25:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:05.520 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:05.520 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:05.520 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:05.520 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:05.520 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:05.520 
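clean_kernel_target (nvmf/common.sh@684-695 in the trace) tears the kernel nvmet configuration down strictly leaf-first: unlink the port's subsystem symlink, then remove the namespace, port, and subsystem directories, then unload the modules. A sketch of that ordering, run against a throwaway directory tree standing in for /sys/kernel/config/nvmet so it is safe to execute anywhere:

```shell
#!/usr/bin/env bash
# Mock of the nvmet configfs layout from the log; everything lives under a
# temp dir, so no real kernel target is touched.
set -e
root=$(mktemp -d)
nqn="nqn.2016-06.io.spdk:testnqn"
mkdir -p "$root/ports/1/subsystems" "$root/subsystems/$nqn/namespaces/1"
ln -s "$root/subsystems/$nqn" "$root/ports/1/subsystems/$nqn"

# 1) Detach the subsystem from the port first (common.sh@688).
rm -f "$root/ports/1/subsystems/$nqn"
# 2) Remove namespace, then port, then subsystem (common.sh@689-691). The
#    extra inner rmdirs are mock-only: real configfs manages those
#    intermediate directories itself.
rmdir "$root/subsystems/$nqn/namespaces/1"
rmdir "$root/ports/1/subsystems" "$root/ports/1"
rmdir "$root/subsystems/$nqn/namespaces" "$root/subsystems/$nqn"
rmdir "$root/subsystems" "$root/ports"
# 3) On the real system, `modprobe -r nvmet_tcp nvmet` would follow here.
echo "teardown complete"
```

Running the removals in any other order would fail on a live target, since configfs refuses to remove a directory that still has children or referencing symlinks.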
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:05.779 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:05.779 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:05.779 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:05.779 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:05.779 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:05.779 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:05.779 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:05.779 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:05.779 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:05.779 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:06.715 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:06.715 00:37:06.715 real 0m14.282s 00:37:06.715 user 0m6.272s 00:37:06.715 sys 0m3.254s 00:37:06.715 18:25:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:06.715 18:25:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:06.715 ************************************ 00:37:06.715 END TEST kernel_target_abort 00:37:06.715 ************************************ 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:06.715 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:06.715 rmmod nvme_tcp 00:37:06.715 rmmod nvme_fabrics 
00:37:06.974 rmmod nvme_keyring 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2525498 ']' 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2525498 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2525498 ']' 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2525498 00:37:06.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2525498) - No such process 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2525498 is not found' 00:37:06.974 Process with pid 2525498 is not found 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:06.974 18:25:14 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:07.908 Waiting for block devices as requested 00:37:07.908 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:08.166 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:08.166 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:08.424 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:08.424 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:08.424 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:08.424 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:08.683 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:08.683 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:08.683 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:08.683 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:08.941 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:08.941 0000:80:04.4 (8086 0e24): 
vfio-pci -> ioatdma 00:37:08.941 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:09.198 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:09.198 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:09.198 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:09.456 18:25:16 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:09.456 18:25:16 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:09.456 18:25:16 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:09.456 18:25:16 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:09.456 18:25:16 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.456 18:25:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:09.456 18:25:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.355 18:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:11.355 00:37:11.355 real 0m38.082s 00:37:11.355 user 1m2.256s 00:37:11.355 sys 0m9.131s 00:37:11.355 18:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:11.355 18:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:11.355 ************************************ 00:37:11.355 END TEST nvmf_abort_qd_sizes 00:37:11.355 ************************************ 00:37:11.355 18:25:18 -- common/autotest_common.sh@1142 -- # return 0 00:37:11.355 18:25:18 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:11.355 18:25:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:11.355 18:25:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:11.355 18:25:18 -- common/autotest_common.sh@10 -- # set +x 00:37:11.355 ************************************ 00:37:11.355 START TEST keyring_file 00:37:11.355 
************************************ 00:37:11.355 18:25:18 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:11.355 * Looking for test storage... 00:37:11.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:11.355 18:25:18 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:11.355 
18:25:18 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:11.355 18:25:18 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.355 18:25:18 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.355 18:25:18 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.355 18:25:18 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.355 18:25:18 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.355 18:25:18 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.355 18:25:18 
keyring_file -- paths/export.sh@5 -- # export PATH 00:37:11.355 18:25:18 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:11.355 18:25:18 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:11.355 18:25:18 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:11.355 18:25:18 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:11.355 18:25:18 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:11.355 18:25:18 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:11.355 18:25:18 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qwXi87cMBE 00:37:11.355 18:25:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:11.355 18:25:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:11.356 18:25:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:11.356 18:25:18 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:11.356 18:25:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:11.356 18:25:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qwXi87cMBE 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qwXi87cMBE 00:37:11.614 18:25:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.qwXi87cMBE 00:37:11.614 18:25:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.f29G80ATSc 00:37:11.614 18:25:19 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:11.614 18:25:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:11.614 18:25:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:11.614 18:25:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:11.614 18:25:19 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:11.614 18:25:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:11.614 18:25:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.f29G80ATSc 00:37:11.614 18:25:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.f29G80ATSc 00:37:11.614 18:25:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.f29G80ATSc 00:37:11.614 18:25:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=2531251 00:37:11.614 18:25:19 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:11.614 18:25:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2531251 00:37:11.614 18:25:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2531251 ']' 00:37:11.614 18:25:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.614 18:25:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:11.614 18:25:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
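prep_key (keyring/common.sh@15-23 in the trace) follows the same pattern for each key: mktemp a path, have format_interchange_psk (an inline python helper in nvmf/common.sh) render the key text, write it out, and chmod the file to 0600 before bperf loads it. A sketch of that file-handling flow; the literal key text is a placeholder here, since the exact interchange encoding is produced by the helper and is not reproduced in this log:

```shell
#!/usr/bin/env bash
# prep_key-style key file handling. "PLACEHOLDER" stands in for the real
# format_interchange_psk output; only the NVMeTLSkey-1 prefix is taken from
# the log (nvmf/common.sh@704), the rest of the format is not assumed.
set -e
keyfile=$(mktemp)                          # e.g. /tmp/tmp.qwXi87cMBE in the log
printf '%s\n' "NVMeTLSkey-1:PLACEHOLDER:" > "$keyfile"
chmod 0600 "$keyfile"                      # keyring/common.sh@21
stat -c '%a' "$keyfile"
```

The 0600 mode matters: the file holds PSK material, and keyring_file_add_key later registers this path with the bdevperf instance over /var/tmp/bperf.sock.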
00:37:11.614 18:25:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:11.614 18:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.614 [2024-07-23 18:25:19.128356] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:37:11.614 [2024-07-23 18:25:19.128440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531251 ] 00:37:11.614 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.614 [2024-07-23 18:25:19.186778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.614 [2024-07-23 18:25:19.272209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.872 18:25:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:11.872 18:25:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:11.872 18:25:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:11.872 18:25:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.872 18:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.872 [2024-07-23 18:25:19.516669] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.131 null0 00:37:12.131 [2024-07-23 18:25:19.548739] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:12.131 [2024-07-23 18:25:19.549226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:12.131 [2024-07-23 18:25:19.556725] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.131 18:25:19 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:12.131 [2024-07-23 18:25:19.568733] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:12.131 request: 00:37:12.131 { 00:37:12.131 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.131 "secure_channel": false, 00:37:12.131 "listen_address": { 00:37:12.131 "trtype": "tcp", 00:37:12.131 "traddr": "127.0.0.1", 00:37:12.131 "trsvcid": "4420" 00:37:12.131 }, 00:37:12.131 "method": "nvmf_subsystem_add_listener", 00:37:12.131 "req_id": 1 00:37:12.131 } 00:37:12.131 Got JSON-RPC error response 00:37:12.131 response: 00:37:12.131 { 00:37:12.131 "code": -32602, 00:37:12.131 "message": "Invalid parameters" 00:37:12.131 } 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:12.131 18:25:19 keyring_file -- keyring/file.sh@46 -- # bperfpid=2531260 00:37:12.131 18:25:19 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2531260 /var/tmp/bperf.sock 00:37:12.131 18:25:19 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2531260 ']' 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:12.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:12.131 18:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:12.131 [2024-07-23 18:25:19.615624] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:37:12.131 [2024-07-23 18:25:19.615693] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531260 ] 00:37:12.131 EAL: No free 2048 kB hugepages reported on node 1 00:37:12.131 [2024-07-23 18:25:19.683477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.131 [2024-07-23 18:25:19.778166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.395 18:25:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:12.395 18:25:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:12.395 18:25:19 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qwXi87cMBE 00:37:12.395 18:25:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qwXi87cMBE 00:37:12.707 18:25:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.f29G80ATSc 00:37:12.707 18:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.f29G80ATSc 00:37:12.965 18:25:20 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:12.965 18:25:20 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:12.965 18:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.965 18:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.965 18:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.223 18:25:20 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.qwXi87cMBE == 
\/\t\m\p\/\t\m\p\.\q\w\X\i\8\7\c\M\B\E ]] 00:37:13.223 18:25:20 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:13.223 18:25:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:13.223 18:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.223 18:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.223 18:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:13.223 18:25:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.f29G80ATSc == \/\t\m\p\/\t\m\p\.\f\2\9\G\8\0\A\T\S\c ]] 00:37:13.480 18:25:20 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:13.480 18:25:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.480 18:25:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.480 18:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.480 18:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.480 18:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.480 18:25:21 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:13.480 18:25:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:13.480 18:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:13.480 18:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.480 18:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.480 18:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.480 18:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:13.738 18:25:21 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:37:13.738 18:25:21 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.738 18:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.995 [2024-07-23 18:25:21.618180] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:14.253 nvme0n1 00:37:14.253 18:25:21 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:14.253 18:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:14.253 18:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:14.253 18:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.253 18:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.253 18:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:14.511 18:25:21 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:14.511 18:25:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:14.511 18:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:14.511 18:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:14.511 18:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.511 18:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.511 18:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:14.769 18:25:22 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:14.769 18:25:22 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:14.769 Running I/O for 1 seconds... 00:37:15.702 00:37:15.702 Latency(us) 00:37:15.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.702 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:15.702 nvme0n1 : 1.01 9388.27 36.67 0.00 0.00 13579.49 4514.70 21456.97 00:37:15.702 =================================================================================================================== 00:37:15.702 Total : 9388.27 36.67 0.00 0.00 13579.49 4514.70 21456.97 00:37:15.702 0 00:37:15.702 18:25:23 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:15.702 18:25:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:15.958 18:25:23 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:15.958 18:25:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:15.958 18:25:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.958 18:25:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.958 18:25:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:15.958 18:25:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.214 18:25:23 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:16.214 18:25:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:16.214 18:25:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:16.214 18:25:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.214 18:25:23 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.214 18:25:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.214 18:25:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:16.471 18:25:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:16.471 18:25:24 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:16.471 18:25:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:16.471 18:25:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:16.471 18:25:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:16.471 18:25:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:16.471 18:25:24 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:16.471 18:25:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:16.471 18:25:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:16.471 18:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:16.728 [2024-07-23 18:25:24.326983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:16.728 [2024-07-23 18:25:24.327268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6018f0 (107): Transport endpoint is not connected 00:37:16.728 [2024-07-23 18:25:24.328259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6018f0 (9): Bad file descriptor 00:37:16.728 [2024-07-23 18:25:24.329259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:16.728 [2024-07-23 18:25:24.329278] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:16.728 [2024-07-23 18:25:24.329305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:16.728 request: 00:37:16.728 { 00:37:16.728 "name": "nvme0", 00:37:16.728 "trtype": "tcp", 00:37:16.728 "traddr": "127.0.0.1", 00:37:16.728 "adrfam": "ipv4", 00:37:16.728 "trsvcid": "4420", 00:37:16.728 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.728 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.728 "prchk_reftag": false, 00:37:16.728 "prchk_guard": false, 00:37:16.728 "hdgst": false, 00:37:16.728 "ddgst": false, 00:37:16.728 "psk": "key1", 00:37:16.728 "method": "bdev_nvme_attach_controller", 00:37:16.728 "req_id": 1 00:37:16.728 } 00:37:16.728 Got JSON-RPC error response 00:37:16.728 response: 00:37:16.728 { 00:37:16.728 "code": -5, 00:37:16.728 "message": "Input/output error" 00:37:16.728 } 00:37:16.728 18:25:24 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:16.728 18:25:24 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:16.728 18:25:24 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:16.728 18:25:24 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:16.728 18:25:24 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:16.728 
18:25:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:16.728 18:25:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.728 18:25:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.728 18:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.728 18:25:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:16.986 18:25:24 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:16.986 18:25:24 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:16.986 18:25:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:16.986 18:25:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.986 18:25:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.986 18:25:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:16.986 18:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.242 18:25:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:17.242 18:25:24 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:17.242 18:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:17.499 18:25:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:17.499 18:25:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:17.756 18:25:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:17.756 18:25:25 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:17.756 18:25:25 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.013 18:25:25 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:18.013 18:25:25 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.qwXi87cMBE 00:37:18.013 18:25:25 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.qwXi87cMBE 00:37:18.013 18:25:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:18.013 18:25:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.qwXi87cMBE 00:37:18.013 18:25:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:18.013 18:25:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:18.013 18:25:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:18.013 18:25:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:18.013 18:25:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qwXi87cMBE 00:37:18.013 18:25:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qwXi87cMBE 00:37:18.271 [2024-07-23 18:25:25.829375] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qwXi87cMBE': 0100660 00:37:18.271 [2024-07-23 18:25:25.829414] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:18.271 request: 00:37:18.271 { 00:37:18.271 "name": "key0", 00:37:18.271 "path": "/tmp/tmp.qwXi87cMBE", 00:37:18.271 "method": "keyring_file_add_key", 00:37:18.271 "req_id": 1 00:37:18.271 } 00:37:18.271 Got JSON-RPC error response 00:37:18.271 response: 00:37:18.271 { 00:37:18.271 "code": -1, 00:37:18.271 "message": "Operation not permitted" 
00:37:18.271 } 00:37:18.271 18:25:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:18.271 18:25:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:18.271 18:25:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:18.271 18:25:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:18.271 18:25:25 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.qwXi87cMBE 00:37:18.271 18:25:25 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qwXi87cMBE 00:37:18.271 18:25:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qwXi87cMBE 00:37:18.528 18:25:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.qwXi87cMBE 00:37:18.528 18:25:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:18.528 18:25:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:18.528 18:25:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.528 18:25:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.528 18:25:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.528 18:25:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.785 18:25:26 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:18.785 18:25:26 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.785 18:25:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:18.785 18:25:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.785 18:25:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:18.785 18:25:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:18.785 18:25:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:18.785 18:25:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:18.785 18:25:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.785 18:25:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.042 [2024-07-23 18:25:26.575415] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.qwXi87cMBE': No such file or directory 00:37:19.042 [2024-07-23 18:25:26.575447] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:19.042 [2024-07-23 18:25:26.575489] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:19.042 [2024-07-23 18:25:26.575502] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:19.042 [2024-07-23 18:25:26.575513] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:19.042 request: 00:37:19.042 { 00:37:19.042 "name": "nvme0", 00:37:19.042 "trtype": "tcp", 00:37:19.042 "traddr": "127.0.0.1", 00:37:19.042 "adrfam": "ipv4", 00:37:19.042 "trsvcid": "4420", 00:37:19.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.042 
"prchk_reftag": false, 00:37:19.042 "prchk_guard": false, 00:37:19.042 "hdgst": false, 00:37:19.042 "ddgst": false, 00:37:19.042 "psk": "key0", 00:37:19.042 "method": "bdev_nvme_attach_controller", 00:37:19.042 "req_id": 1 00:37:19.042 } 00:37:19.042 Got JSON-RPC error response 00:37:19.042 response: 00:37:19.042 { 00:37:19.042 "code": -19, 00:37:19.042 "message": "No such device" 00:37:19.042 } 00:37:19.042 18:25:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:19.042 18:25:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:19.042 18:25:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:19.042 18:25:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:19.042 18:25:26 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:19.042 18:25:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:19.299 18:25:26 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:19.299 18:25:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:19.299 18:25:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:19.299 18:25:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:19.299 18:25:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:19.299 18:25:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:19.299 18:25:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.R9shHWVfzS 00:37:19.299 18:25:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:19.299 18:25:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:19.299 18:25:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:19.299 18:25:26 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:19.299 18:25:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:19.299 18:25:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:19.299 18:25:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:19.299 18:25:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.R9shHWVfzS 00:37:19.300 18:25:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.R9shHWVfzS 00:37:19.300 18:25:26 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.R9shHWVfzS 00:37:19.300 18:25:26 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.R9shHWVfzS 00:37:19.300 18:25:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.R9shHWVfzS 00:37:19.557 18:25:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.557 18:25:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.814 nvme0n1 00:37:19.814 18:25:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:19.814 18:25:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:19.814 18:25:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.814 18:25:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.814 18:25:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.814 18:25:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:20.071 18:25:27 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:20.071 18:25:27 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:20.071 18:25:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:20.329 18:25:27 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:20.329 18:25:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:20.329 18:25:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.329 18:25:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.329 18:25:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:20.587 18:25:28 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:20.587 18:25:28 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:20.587 18:25:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:20.587 18:25:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.587 18:25:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.587 18:25:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:20.587 18:25:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.844 18:25:28 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:20.844 18:25:28 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:20.844 18:25:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:21.102 18:25:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:37:21.102 18:25:28 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:21.102 18:25:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.359 18:25:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:21.359 18:25:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.R9shHWVfzS 00:37:21.359 18:25:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.R9shHWVfzS 00:37:21.617 18:25:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.f29G80ATSc 00:37:21.617 18:25:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.f29G80ATSc 00:37:21.874 18:25:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:21.874 18:25:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:22.132 nvme0n1 00:37:22.132 18:25:29 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:22.132 18:25:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:22.390 18:25:30 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:22.390 "subsystems": [ 00:37:22.390 { 00:37:22.390 "subsystem": "keyring", 00:37:22.390 "config": [ 00:37:22.390 { 00:37:22.390 "method": "keyring_file_add_key", 00:37:22.390 
"params": { 00:37:22.390 "name": "key0", 00:37:22.390 "path": "/tmp/tmp.R9shHWVfzS" 00:37:22.390 } 00:37:22.390 }, 00:37:22.390 { 00:37:22.390 "method": "keyring_file_add_key", 00:37:22.390 "params": { 00:37:22.390 "name": "key1", 00:37:22.390 "path": "/tmp/tmp.f29G80ATSc" 00:37:22.390 } 00:37:22.390 } 00:37:22.390 ] 00:37:22.390 }, 00:37:22.390 { 00:37:22.390 "subsystem": "iobuf", 00:37:22.390 "config": [ 00:37:22.390 { 00:37:22.390 "method": "iobuf_set_options", 00:37:22.390 "params": { 00:37:22.390 "small_pool_count": 8192, 00:37:22.390 "large_pool_count": 1024, 00:37:22.390 "small_bufsize": 8192, 00:37:22.390 "large_bufsize": 135168 00:37:22.390 } 00:37:22.390 } 00:37:22.390 ] 00:37:22.390 }, 00:37:22.390 { 00:37:22.390 "subsystem": "sock", 00:37:22.390 "config": [ 00:37:22.390 { 00:37:22.390 "method": "sock_set_default_impl", 00:37:22.390 "params": { 00:37:22.390 "impl_name": "posix" 00:37:22.390 } 00:37:22.390 }, 00:37:22.390 { 00:37:22.390 "method": "sock_impl_set_options", 00:37:22.390 "params": { 00:37:22.390 "impl_name": "ssl", 00:37:22.390 "recv_buf_size": 4096, 00:37:22.390 "send_buf_size": 4096, 00:37:22.390 "enable_recv_pipe": true, 00:37:22.390 "enable_quickack": false, 00:37:22.390 "enable_placement_id": 0, 00:37:22.390 "enable_zerocopy_send_server": true, 00:37:22.390 "enable_zerocopy_send_client": false, 00:37:22.390 "zerocopy_threshold": 0, 00:37:22.390 "tls_version": 0, 00:37:22.390 "enable_ktls": false 00:37:22.390 } 00:37:22.390 }, 00:37:22.390 { 00:37:22.390 "method": "sock_impl_set_options", 00:37:22.390 "params": { 00:37:22.390 "impl_name": "posix", 00:37:22.390 "recv_buf_size": 2097152, 00:37:22.390 "send_buf_size": 2097152, 00:37:22.390 "enable_recv_pipe": true, 00:37:22.390 "enable_quickack": false, 00:37:22.390 "enable_placement_id": 0, 00:37:22.390 "enable_zerocopy_send_server": true, 00:37:22.390 "enable_zerocopy_send_client": false, 00:37:22.390 "zerocopy_threshold": 0, 00:37:22.390 "tls_version": 0, 00:37:22.390 "enable_ktls": false 
00:37:22.390 } 00:37:22.390 } 00:37:22.390 ] 00:37:22.390 }, 00:37:22.390 { 00:37:22.390 "subsystem": "vmd", 00:37:22.390 "config": [] 00:37:22.390 }, 00:37:22.390 { 00:37:22.390 "subsystem": "accel", 00:37:22.390 "config": [ 00:37:22.390 { 00:37:22.390 "method": "accel_set_options", 00:37:22.390 "params": { 00:37:22.390 "small_cache_size": 128, 00:37:22.390 "large_cache_size": 16, 00:37:22.390 "task_count": 2048, 00:37:22.390 "sequence_count": 2048, 00:37:22.390 "buf_count": 2048 00:37:22.390 } 00:37:22.390 } 00:37:22.390 ] 00:37:22.390 }, 00:37:22.390 { 00:37:22.390 "subsystem": "bdev", 00:37:22.390 "config": [ 00:37:22.390 { 00:37:22.390 "method": "bdev_set_options", 00:37:22.391 "params": { 00:37:22.391 "bdev_io_pool_size": 65535, 00:37:22.391 "bdev_io_cache_size": 256, 00:37:22.391 "bdev_auto_examine": true, 00:37:22.391 "iobuf_small_cache_size": 128, 00:37:22.391 "iobuf_large_cache_size": 16 00:37:22.391 } 00:37:22.391 }, 00:37:22.391 { 00:37:22.391 "method": "bdev_raid_set_options", 00:37:22.391 "params": { 00:37:22.391 "process_window_size_kb": 1024, 00:37:22.391 "process_max_bandwidth_mb_sec": 0 00:37:22.391 } 00:37:22.391 }, 00:37:22.391 { 00:37:22.391 "method": "bdev_iscsi_set_options", 00:37:22.391 "params": { 00:37:22.391 "timeout_sec": 30 00:37:22.391 } 00:37:22.391 }, 00:37:22.391 { 00:37:22.391 "method": "bdev_nvme_set_options", 00:37:22.391 "params": { 00:37:22.391 "action_on_timeout": "none", 00:37:22.391 "timeout_us": 0, 00:37:22.391 "timeout_admin_us": 0, 00:37:22.391 "keep_alive_timeout_ms": 10000, 00:37:22.391 "arbitration_burst": 0, 00:37:22.391 "low_priority_weight": 0, 00:37:22.391 "medium_priority_weight": 0, 00:37:22.391 "high_priority_weight": 0, 00:37:22.391 "nvme_adminq_poll_period_us": 10000, 00:37:22.391 "nvme_ioq_poll_period_us": 0, 00:37:22.391 "io_queue_requests": 512, 00:37:22.391 "delay_cmd_submit": true, 00:37:22.391 "transport_retry_count": 4, 00:37:22.391 "bdev_retry_count": 3, 00:37:22.391 "transport_ack_timeout": 0, 
00:37:22.391 "ctrlr_loss_timeout_sec": 0, 00:37:22.391 "reconnect_delay_sec": 0, 00:37:22.391 "fast_io_fail_timeout_sec": 0, 00:37:22.391 "disable_auto_failback": false, 00:37:22.391 "generate_uuids": false, 00:37:22.391 "transport_tos": 0, 00:37:22.391 "nvme_error_stat": false, 00:37:22.391 "rdma_srq_size": 0, 00:37:22.391 "io_path_stat": false, 00:37:22.391 "allow_accel_sequence": false, 00:37:22.391 "rdma_max_cq_size": 0, 00:37:22.391 "rdma_cm_event_timeout_ms": 0, 00:37:22.391 "dhchap_digests": [ 00:37:22.391 "sha256", 00:37:22.391 "sha384", 00:37:22.391 "sha512" 00:37:22.391 ], 00:37:22.391 "dhchap_dhgroups": [ 00:37:22.391 "null", 00:37:22.391 "ffdhe2048", 00:37:22.391 "ffdhe3072", 00:37:22.391 "ffdhe4096", 00:37:22.391 "ffdhe6144", 00:37:22.391 "ffdhe8192" 00:37:22.391 ] 00:37:22.391 } 00:37:22.391 }, 00:37:22.391 { 00:37:22.391 "method": "bdev_nvme_attach_controller", 00:37:22.391 "params": { 00:37:22.391 "name": "nvme0", 00:37:22.391 "trtype": "TCP", 00:37:22.391 "adrfam": "IPv4", 00:37:22.391 "traddr": "127.0.0.1", 00:37:22.391 "trsvcid": "4420", 00:37:22.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.391 "prchk_reftag": false, 00:37:22.391 "prchk_guard": false, 00:37:22.391 "ctrlr_loss_timeout_sec": 0, 00:37:22.391 "reconnect_delay_sec": 0, 00:37:22.391 "fast_io_fail_timeout_sec": 0, 00:37:22.391 "psk": "key0", 00:37:22.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.391 "hdgst": false, 00:37:22.391 "ddgst": false 00:37:22.391 } 00:37:22.391 }, 00:37:22.391 { 00:37:22.391 "method": "bdev_nvme_set_hotplug", 00:37:22.391 "params": { 00:37:22.391 "period_us": 100000, 00:37:22.391 "enable": false 00:37:22.391 } 00:37:22.391 }, 00:37:22.391 { 00:37:22.391 "method": "bdev_wait_for_examine" 00:37:22.391 } 00:37:22.391 ] 00:37:22.391 }, 00:37:22.391 { 00:37:22.391 "subsystem": "nbd", 00:37:22.391 "config": [] 00:37:22.391 } 00:37:22.391 ] 00:37:22.391 }' 00:37:22.391 18:25:30 keyring_file -- keyring/file.sh@114 -- # killprocess 2531260 00:37:22.391 
18:25:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2531260 ']' 00:37:22.391 18:25:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2531260 00:37:22.391 18:25:30 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2531260 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2531260' 00:37:22.649 killing process with pid 2531260 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@967 -- # kill 2531260 00:37:22.649 Received shutdown signal, test time was about 1.000000 seconds 00:37:22.649 00:37:22.649 Latency(us) 00:37:22.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.649 =================================================================================================================== 00:37:22.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@972 -- # wait 2531260 00:37:22.649 18:25:30 keyring_file -- keyring/file.sh@117 -- # bperfpid=2532715 00:37:22.649 18:25:30 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2532715 /var/tmp/bperf.sock 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2532715 ']' 00:37:22.649 18:25:30 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:22.649 18:25:30 keyring_file -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:37:22.649 18:25:30 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:22.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:22.649 18:25:30 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:22.649 "subsystems": [ 00:37:22.649 { 00:37:22.649 "subsystem": "keyring", 00:37:22.649 "config": [ 00:37:22.649 { 00:37:22.649 "method": "keyring_file_add_key", 00:37:22.649 "params": { 00:37:22.649 "name": "key0", 00:37:22.649 "path": "/tmp/tmp.R9shHWVfzS" 00:37:22.649 } 00:37:22.649 }, 00:37:22.649 { 00:37:22.649 "method": "keyring_file_add_key", 00:37:22.649 "params": { 00:37:22.649 "name": "key1", 00:37:22.649 "path": "/tmp/tmp.f29G80ATSc" 00:37:22.649 } 00:37:22.649 } 00:37:22.649 ] 00:37:22.649 }, 00:37:22.649 { 00:37:22.649 "subsystem": "iobuf", 00:37:22.649 "config": [ 00:37:22.649 { 00:37:22.649 "method": "iobuf_set_options", 00:37:22.649 "params": { 00:37:22.649 "small_pool_count": 8192, 00:37:22.649 "large_pool_count": 1024, 00:37:22.649 "small_bufsize": 8192, 00:37:22.649 "large_bufsize": 135168 00:37:22.649 } 00:37:22.649 } 00:37:22.649 ] 00:37:22.649 }, 00:37:22.649 { 00:37:22.649 "subsystem": "sock", 00:37:22.649 "config": [ 00:37:22.649 { 00:37:22.649 "method": "sock_set_default_impl", 00:37:22.649 "params": { 00:37:22.649 "impl_name": "posix" 00:37:22.649 } 00:37:22.649 }, 00:37:22.649 { 00:37:22.649 "method": "sock_impl_set_options", 00:37:22.649 "params": { 00:37:22.649 "impl_name": "ssl", 00:37:22.649 "recv_buf_size": 4096, 00:37:22.649 "send_buf_size": 4096, 00:37:22.649 "enable_recv_pipe": true, 00:37:22.649 "enable_quickack": false, 00:37:22.649 "enable_placement_id": 0, 00:37:22.649 "enable_zerocopy_send_server": true, 00:37:22.649 "enable_zerocopy_send_client": false, 00:37:22.649 "zerocopy_threshold": 0, 00:37:22.649 "tls_version": 0, 
00:37:22.649 "enable_ktls": false 00:37:22.649 } 00:37:22.649 }, 00:37:22.649 { 00:37:22.649 "method": "sock_impl_set_options", 00:37:22.649 "params": { 00:37:22.649 "impl_name": "posix", 00:37:22.649 "recv_buf_size": 2097152, 00:37:22.649 "send_buf_size": 2097152, 00:37:22.650 "enable_recv_pipe": true, 00:37:22.650 "enable_quickack": false, 00:37:22.650 "enable_placement_id": 0, 00:37:22.650 "enable_zerocopy_send_server": true, 00:37:22.650 "enable_zerocopy_send_client": false, 00:37:22.650 "zerocopy_threshold": 0, 00:37:22.650 "tls_version": 0, 00:37:22.650 "enable_ktls": false 00:37:22.650 } 00:37:22.650 } 00:37:22.650 ] 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "subsystem": "vmd", 00:37:22.650 "config": [] 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "subsystem": "accel", 00:37:22.650 "config": [ 00:37:22.650 { 00:37:22.650 "method": "accel_set_options", 00:37:22.650 "params": { 00:37:22.650 "small_cache_size": 128, 00:37:22.650 "large_cache_size": 16, 00:37:22.650 "task_count": 2048, 00:37:22.650 "sequence_count": 2048, 00:37:22.650 "buf_count": 2048 00:37:22.650 } 00:37:22.650 } 00:37:22.650 ] 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "subsystem": "bdev", 00:37:22.650 "config": [ 00:37:22.650 { 00:37:22.650 "method": "bdev_set_options", 00:37:22.650 "params": { 00:37:22.650 "bdev_io_pool_size": 65535, 00:37:22.650 "bdev_io_cache_size": 256, 00:37:22.650 "bdev_auto_examine": true, 00:37:22.650 "iobuf_small_cache_size": 128, 00:37:22.650 "iobuf_large_cache_size": 16 00:37:22.650 } 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "method": "bdev_raid_set_options", 00:37:22.650 "params": { 00:37:22.650 "process_window_size_kb": 1024, 00:37:22.650 "process_max_bandwidth_mb_sec": 0 00:37:22.650 } 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "method": "bdev_iscsi_set_options", 00:37:22.650 "params": { 00:37:22.650 "timeout_sec": 30 00:37:22.650 } 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "method": "bdev_nvme_set_options", 00:37:22.650 "params": { 00:37:22.650 
"action_on_timeout": "none", 00:37:22.650 "timeout_us": 0, 00:37:22.650 "timeout_admin_us": 0, 00:37:22.650 "keep_alive_timeout_ms": 10000, 00:37:22.650 "arbitration_burst": 0, 00:37:22.650 "low_priority_weight": 0, 00:37:22.650 "medium_priority_weight": 0, 00:37:22.650 "high_priority_weight": 0, 00:37:22.650 "nvme_adminq_poll_period_us": 10000, 00:37:22.650 "nvme_ioq_poll_period_us": 0, 00:37:22.650 "io_queue_requests": 512, 00:37:22.650 "delay_cmd_submit": true, 00:37:22.650 "transport_retry_count": 4, 00:37:22.650 "bdev_retry_count": 3, 00:37:22.650 "transport_ack_timeout": 0, 00:37:22.650 "ctrlr_loss_timeout_sec": 0, 00:37:22.650 "reconnect_delay_sec": 0, 00:37:22.650 "fast_io_fail_timeout_sec": 0, 00:37:22.650 "disable_auto_failback": false, 00:37:22.650 "generate_uuids": false, 00:37:22.650 "transport_tos": 0, 00:37:22.650 "nvme_error_stat": false, 00:37:22.650 "rdma_srq_size": 0, 00:37:22.650 "io_path_stat": false, 00:37:22.650 "allow_accel_sequence": false, 00:37:22.650 "rdma_max_cq_size": 0, 00:37:22.650 "rdma_cm_event_timeout_ms": 0, 00:37:22.650 "dhchap_digests": [ 00:37:22.650 "sha256", 00:37:22.650 "sha384", 00:37:22.650 "sha512" 00:37:22.650 ], 00:37:22.650 "dhchap_dhgroups": [ 00:37:22.650 "null", 00:37:22.650 "ffdhe2048", 00:37:22.650 "ffdhe3072", 00:37:22.650 "ffdhe4096", 00:37:22.650 "ffdhe6144", 00:37:22.650 "ffdhe8192" 00:37:22.650 ] 00:37:22.650 } 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "method": "bdev_nvme_attach_controller", 00:37:22.650 "params": { 00:37:22.650 "name": "nvme0", 00:37:22.650 "trtype": "TCP", 00:37:22.650 "adrfam": "IPv4", 00:37:22.650 "traddr": "127.0.0.1", 00:37:22.650 "trsvcid": "4420", 00:37:22.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.650 "prchk_reftag": false, 00:37:22.650 "prchk_guard": false, 00:37:22.650 "ctrlr_loss_timeout_sec": 0, 00:37:22.650 "reconnect_delay_sec": 0, 00:37:22.650 "fast_io_fail_timeout_sec": 0, 00:37:22.650 "psk": "key0", 00:37:22.650 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:37:22.650 "hdgst": false, 00:37:22.650 "ddgst": false 00:37:22.650 } 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "method": "bdev_nvme_set_hotplug", 00:37:22.650 "params": { 00:37:22.650 "period_us": 100000, 00:37:22.650 "enable": false 00:37:22.650 } 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "method": "bdev_wait_for_examine" 00:37:22.650 } 00:37:22.650 ] 00:37:22.650 }, 00:37:22.650 { 00:37:22.650 "subsystem": "nbd", 00:37:22.650 "config": [] 00:37:22.650 } 00:37:22.650 ] 00:37:22.650 }' 00:37:22.650 18:25:30 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:22.650 18:25:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:22.908 [2024-07-23 18:25:30.329842] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 00:37:22.908 [2024-07-23 18:25:30.329924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532715 ] 00:37:22.908 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.908 [2024-07-23 18:25:30.388822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.908 [2024-07-23 18:25:30.474183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.165 [2024-07-23 18:25:30.648265] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:23.730 18:25:31 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:23.730 18:25:31 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:23.731 18:25:31 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:23.731 18:25:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.731 18:25:31 keyring_file -- keyring/file.sh@120 -- # jq length 
00:37:23.988 18:25:31 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:23.988 18:25:31 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:23.988 18:25:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.988 18:25:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.988 18:25:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.988 18:25:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.988 18:25:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.246 18:25:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:24.246 18:25:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:24.246 18:25:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:24.246 18:25:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:24.246 18:25:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.246 18:25:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.246 18:25:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:24.503 18:25:32 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:24.503 18:25:32 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:24.504 18:25:32 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:24.504 18:25:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:24.761 18:25:32 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:24.761 18:25:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:24.761 18:25:32 keyring_file -- keyring/file.sh@19 -- # rm -f 
/tmp/tmp.R9shHWVfzS /tmp/tmp.f29G80ATSc 00:37:24.761 18:25:32 keyring_file -- keyring/file.sh@20 -- # killprocess 2532715 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2532715 ']' 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2532715 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2532715 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2532715' 00:37:24.761 killing process with pid 2532715 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@967 -- # kill 2532715 00:37:24.761 Received shutdown signal, test time was about 1.000000 seconds 00:37:24.761 00:37:24.761 Latency(us) 00:37:24.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.761 =================================================================================================================== 00:37:24.761 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:24.761 18:25:32 keyring_file -- common/autotest_common.sh@972 -- # wait 2532715 00:37:25.020 18:25:32 keyring_file -- keyring/file.sh@21 -- # killprocess 2531251 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2531251 ']' 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2531251 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:25.020 18:25:32 keyring_file -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2531251 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2531251' 00:37:25.020 killing process with pid 2531251 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@967 -- # kill 2531251 00:37:25.020 [2024-07-23 18:25:32.551257] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:25.020 18:25:32 keyring_file -- common/autotest_common.sh@972 -- # wait 2531251 00:37:25.585 00:37:25.585 real 0m14.014s 00:37:25.585 user 0m35.110s 00:37:25.585 sys 0m3.239s 00:37:25.585 18:25:32 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:25.585 18:25:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:25.585 ************************************ 00:37:25.585 END TEST keyring_file 00:37:25.585 ************************************ 00:37:25.585 18:25:32 -- common/autotest_common.sh@1142 -- # return 0 00:37:25.585 18:25:32 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:25.585 18:25:32 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:25.585 18:25:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:25.585 18:25:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:25.585 18:25:32 -- common/autotest_common.sh@10 -- # set +x 00:37:25.585 ************************************ 00:37:25.585 START TEST keyring_linux 00:37:25.585 ************************************ 00:37:25.585 18:25:32 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:25.585 * Looking for test storage... 
00:37:25.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:25.585 18:25:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:25.585 18:25:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:25.585 18:25:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:25.586 18:25:33 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:25.586 18:25:33 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:25.586 18:25:33 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:25.586 18:25:33 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:25.586 18:25:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.586 18:25:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.586 18:25:33 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.586 18:25:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:25.586 18:25:33 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:25.586 18:25:33 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:25.586 /tmp/:spdk-test:key0 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:25.586 18:25:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:25.586 18:25:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:25.586 /tmp/:spdk-test:key1 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2533080 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:25.586 18:25:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2533080 00:37:25.586 18:25:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2533080 ']' 00:37:25.586 18:25:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:25.586 18:25:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:25.586 18:25:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:25.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.586 18:25:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:25.586 18:25:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:25.586 [2024-07-23 18:25:33.181543] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
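The `prep_key` steps above wrap each configured hex key in an `NVMeTLSkey-1:00:…:` interchange string via `format_interchange_psk`. A minimal sketch of that derivation, assuming the conventional NVMe/TCP TLS PSK interchange layout (prefix, two-digit hash identifier, then base64 of the PSK bytes with a little-endian CRC32 appended) — this is an illustration of the format, not a copy of the helper in `nvmf/common.sh`, and may not reproduce the logged strings byte-for-byte:

```python
import base64
import struct
import zlib

def format_interchange_psk(configured_psk: str, hash_id: int = 0) -> str:
    """Build an NVMe TLS PSK interchange string (sketch):
    'NVMeTLSkey-1:<hash>:' + base64(PSK bytes + little-endian CRC32) + ':'."""
    data = configured_psk.encode("ascii")
    # Append a CRC32 of the PSK so a receiver can detect transcription errors.
    data += struct.pack("<I", zlib.crc32(data))
    return f"NVMeTLSkey-1:{hash_id:02}:{base64.b64encode(data).decode()}:"

# Same configured key0 as the test above (00112233445566778899aabbccddeeff).
print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

The test later verifies this round-trip implicitly: `keyctl print` on the stored key must match the `NVMeTLSkey-1:00:…:` string written to `/tmp/:spdk-test:key0`.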
00:37:25.586 [2024-07-23 18:25:33.181658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533080 ] 00:37:25.586 EAL: No free 2048 kB hugepages reported on node 1 00:37:25.586 [2024-07-23 18:25:33.238743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.844 [2024-07-23 18:25:33.323777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:26.101 18:25:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:26.101 [2024-07-23 18:25:33.569808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:26.101 null0 00:37:26.101 [2024-07-23 18:25:33.601867] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:26.101 [2024-07-23 18:25:33.602339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.101 18:25:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:26.101 1010730642 00:37:26.101 18:25:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:26.101 345654123 00:37:26.101 18:25:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2533105 00:37:26.101 18:25:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2533105 
/var/tmp/bperf.sock 00:37:26.101 18:25:33 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2533105 ']' 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:26.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:26.101 18:25:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:26.101 [2024-07-23 18:25:33.672356] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 23.11.0 initialization... 
00:37:26.101 [2024-07-23 18:25:33.672453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533105 ] 00:37:26.101 EAL: No free 2048 kB hugepages reported on node 1 00:37:26.101 [2024-07-23 18:25:33.736127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.359 [2024-07-23 18:25:33.820680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:26.359 18:25:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:26.359 18:25:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:26.359 18:25:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:26.359 18:25:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:26.617 18:25:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:26.617 18:25:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:26.875 18:25:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:26.875 18:25:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:27.133 [2024-07-23 18:25:34.683264] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:27.133 
nvme0n1 00:37:27.133 18:25:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:27.133 18:25:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:27.133 18:25:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:27.133 18:25:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:27.133 18:25:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.133 18:25:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:27.414 18:25:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:27.414 18:25:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:27.414 18:25:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:27.414 18:25:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:27.414 18:25:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:27.414 18:25:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.414 18:25:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:27.686 18:25:35 keyring_linux -- keyring/linux.sh@25 -- # sn=1010730642 00:37:27.686 18:25:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:27.686 18:25:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:27.686 18:25:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 1010730642 == \1\0\1\0\7\3\0\6\4\2 ]] 00:37:27.686 18:25:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1010730642 00:37:27.686 18:25:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:27.686 18:25:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:27.944 Running I/O for 1 seconds... 00:37:28.877 00:37:28.877 Latency(us) 00:37:28.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.877 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:28.877 nvme0n1 : 1.01 9763.61 38.14 0.00 0.00 13015.90 8349.77 21456.97 00:37:28.877 =================================================================================================================== 00:37:28.877 Total : 9763.61 38.14 0.00 0.00 13015.90 8349.77 21456.97 00:37:28.877 0 00:37:28.877 18:25:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:28.877 18:25:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:29.135 18:25:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:29.135 18:25:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:29.135 18:25:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:29.135 18:25:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:29.135 18:25:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:29.135 18:25:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.392 18:25:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:29.392 18:25:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:29.392 18:25:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:29.392 18:25:36 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:29.392 18:25:36 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:29.392 18:25:36 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:29.392 18:25:36 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:29.392 18:25:36 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:29.392 18:25:36 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:29.392 18:25:36 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:29.392 18:25:36 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:29.392 18:25:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:29.651 [2024-07-23 18:25:37.147473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:29.651 [2024-07-23 18:25:37.147851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6860 (107): Transport endpoint is not connected 00:37:29.651 [2024-07-23 18:25:37.148856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1fb6860 (9): Bad file descriptor 00:37:29.651 [2024-07-23 18:25:37.149841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:29.651 [2024-07-23 18:25:37.149871] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:29.651 [2024-07-23 18:25:37.149899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:29.651 request: 00:37:29.651 { 00:37:29.651 "name": "nvme0", 00:37:29.651 "trtype": "tcp", 00:37:29.651 "traddr": "127.0.0.1", 00:37:29.651 "adrfam": "ipv4", 00:37:29.651 "trsvcid": "4420", 00:37:29.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:29.651 "prchk_reftag": false, 00:37:29.651 "prchk_guard": false, 00:37:29.651 "hdgst": false, 00:37:29.651 "ddgst": false, 00:37:29.651 "psk": ":spdk-test:key1", 00:37:29.651 "method": "bdev_nvme_attach_controller", 00:37:29.651 "req_id": 1 00:37:29.651 } 00:37:29.651 Got JSON-RPC error response 00:37:29.651 response: 00:37:29.651 { 00:37:29.651 "code": -5, 00:37:29.651 "message": "Input/output error" 00:37:29.651 } 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@33 -- # sn=1010730642 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1010730642 00:37:29.651 1 links removed 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@33 -- # sn=345654123 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 345654123 00:37:29.651 1 links removed 00:37:29.651 18:25:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2533105 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2533105 ']' 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2533105 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2533105 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2533105' 00:37:29.651 killing process with pid 2533105 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 2533105 00:37:29.651 Received shutdown signal, test time was about 1.000000 seconds 00:37:29.651 00:37:29.651 Latency(us) 00:37:29.651 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.651 =================================================================================================================== 00:37:29.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:29.651 18:25:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 2533105 00:37:29.909 18:25:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2533080 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2533080 ']' 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2533080 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2533080 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2533080' 00:37:29.909 killing process with pid 2533080 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 2533080 00:37:29.909 18:25:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 2533080 00:37:30.167 00:37:30.167 real 0m4.773s 00:37:30.167 user 0m9.218s 00:37:30.167 sys 0m1.637s 00:37:30.167 18:25:37 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:30.167 18:25:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:30.167 ************************************ 00:37:30.167 END TEST keyring_linux 00:37:30.167 ************************************ 00:37:30.167 18:25:37 -- common/autotest_common.sh@1142 -- # return 0 00:37:30.167 18:25:37 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@312 -- # '[' 0 
-eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:30.167 18:25:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:30.167 18:25:37 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:30.167 18:25:37 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:30.167 18:25:37 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:30.167 18:25:37 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:30.167 18:25:37 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:30.167 18:25:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:30.167 18:25:37 -- common/autotest_common.sh@10 -- # set +x 00:37:30.167 18:25:37 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:30.167 18:25:37 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:30.167 18:25:37 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:30.167 18:25:37 -- common/autotest_common.sh@10 -- # set +x 00:37:32.072 INFO: APP EXITING 00:37:32.072 INFO: killing all VMs 00:37:32.072 INFO: killing vhost app 00:37:32.072 INFO: EXIT DONE 00:37:33.450 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:33.450 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:33.450 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:33.450 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:33.450 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:33.450 0000:00:04.3 (8086 0e23): Already 
using the ioatdma driver 00:37:33.450 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:33.450 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:33.450 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:33.450 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:33.450 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:33.450 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:33.450 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:33.450 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:33.450 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:33.450 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:33.450 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:34.825 Cleaning 00:37:34.825 Removing: /var/run/dpdk/spdk0/config 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:34.825 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:34.826 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:34.826 Removing: /var/run/dpdk/spdk1/config 00:37:34.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:34.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:34.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:34.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:34.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:34.826 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:34.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:34.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:34.826 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:34.826 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:34.826 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:34.826 Removing: /var/run/dpdk/spdk2/config 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:34.826 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:34.826 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:34.826 Removing: /var/run/dpdk/spdk3/config 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:34.826 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:34.826 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:34.826 Removing: /var/run/dpdk/spdk4/config 00:37:34.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:34.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:34.826 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:34.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:34.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:34.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:34.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:34.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:34.826 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:34.826 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:34.826 Removing: /dev/shm/bdev_svc_trace.1 00:37:34.826 Removing: /dev/shm/nvmf_trace.0 00:37:34.826 Removing: /dev/shm/spdk_tgt_trace.pid2215257 00:37:34.826 Removing: /var/run/dpdk/spdk0 00:37:34.826 Removing: /var/run/dpdk/spdk1 00:37:34.826 Removing: /var/run/dpdk/spdk2 00:37:34.826 Removing: /var/run/dpdk/spdk3 00:37:34.826 Removing: /var/run/dpdk/spdk4 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2213147 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2213936 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2215257 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2215696 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2216384 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2216520 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2217237 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2217254 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2217491 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2218691 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2219728 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2219917 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2220106 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2220426 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2220591 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2220773 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2220935 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2221113 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2221424 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2223774 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2223937 00:37:34.826 
Removing: /var/run/dpdk/spdk_pid2224100 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2224109 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2224414 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2224539 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2224848 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2224972 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2225141 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2225151 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2225321 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2225446 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2225809 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2225968 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2226168 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2226333 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2226476 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2226542 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2226699 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2226973 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2227130 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2227287 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2227444 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2227711 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2227873 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2228026 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2228185 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2228439 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2228612 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2228773 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2228926 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2229186 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2229358 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2229518 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2229672 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2229948 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2230110 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2230264 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2230452 00:37:34.826 Removing: 
/var/run/dpdk/spdk_pid2230656 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2232728 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2235235 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2242132 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2242599 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2245613 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2245775 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2248399 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2252009 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2254187 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2260464 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2265675 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2266985 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2267657 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2277758 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2280159 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2333548 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2336719 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2340639 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2344972 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2344986 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2345519 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2346179 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2346828 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2347227 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2347240 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2347374 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2347509 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2347511 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2348167 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2348758 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2349358 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2349879 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2349886 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2350029 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2350905 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2351627 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2356954 
00:37:34.826 Removing: /var/run/dpdk/spdk_pid2381197 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2383976 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2385155 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2386352 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2386482 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2386619 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2386758 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2387069 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2388384 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2388987 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2389411 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2391151 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2391459 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2392518 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2394914 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2398161 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2401703 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2425192 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2427837 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2431717 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2432540 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2433632 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2436206 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2438552 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2442639 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2442686 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2445529 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2445674 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2445805 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2446071 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2446077 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2447147 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2448327 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2449509 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2450699 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2452034 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2453781 00:37:34.826 Removing: 
/var/run/dpdk/spdk_pid2457464 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2457913 00:37:34.826 Removing: /var/run/dpdk/spdk_pid2459195 00:37:34.827 Removing: /var/run/dpdk/spdk_pid2459932 00:37:34.827 Removing: /var/run/dpdk/spdk_pid2463523 00:37:34.827 Removing: /var/run/dpdk/spdk_pid2465490 00:37:34.827 Removing: /var/run/dpdk/spdk_pid2468893 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2472211 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2478425 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2482637 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2482650 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2495055 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2495460 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2495983 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2496392 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2496969 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2497379 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2497783 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2498192 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2500683 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2500826 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2504623 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2504797 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2506406 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2511312 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2511319 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2514209 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2515608 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2517043 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2517853 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2519763 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2520518 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2525831 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2526188 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2526577 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2528136 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2528418 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2528809 
00:37:35.085 Removing: /var/run/dpdk/spdk_pid2531251 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2531260 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2532715 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2533080 00:37:35.085 Removing: /var/run/dpdk/spdk_pid2533105 00:37:35.085 Clean 00:37:35.085 18:25:42 -- common/autotest_common.sh@1451 -- # return 0 00:37:35.085 18:25:42 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:35.085 18:25:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:35.085 18:25:42 -- common/autotest_common.sh@10 -- # set +x 00:37:35.085 18:25:42 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:35.085 18:25:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:35.085 18:25:42 -- common/autotest_common.sh@10 -- # set +x 00:37:35.085 18:25:42 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:35.085 18:25:42 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:35.085 18:25:42 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:35.085 18:25:42 -- spdk/autotest.sh@391 -- # hash lcov 00:37:35.085 18:25:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:35.085 18:25:42 -- spdk/autotest.sh@393 -- # hostname 00:37:35.085 18:25:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:35.343 geninfo: WARNING: invalid characters removed from testname! 
00:38:07.425 18:26:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:07.425 18:26:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:09.967 18:26:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:13.262 18:26:20 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:15.826 18:26:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:19.119 18:26:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:21.658 18:26:28 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:21.658 18:26:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:21.658 18:26:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:21.658 18:26:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:21.658 18:26:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:21.659 18:26:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.659 18:26:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.659 18:26:28 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.659 18:26:28 -- paths/export.sh@5 -- $ export PATH 00:38:21.659 18:26:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.659 18:26:28 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:21.659 18:26:28 -- common/autobuild_common.sh@447 -- $ date +%s 00:38:21.659 18:26:28 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721751988.XXXXXX 00:38:21.659 18:26:28 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721751988.jWM7j0 00:38:21.659 18:26:28 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:38:21.659 18:26:28 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 00:38:21.659 18:26:28 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:21.659 18:26:28 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:21.659 18:26:28 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:21.659 18:26:28 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:21.659 18:26:28 -- common/autobuild_common.sh@463 -- $ get_config_params 00:38:21.659 18:26:28 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:21.659 18:26:28 -- common/autotest_common.sh@10 -- $ set +x 00:38:21.659 18:26:29 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:21.659 18:26:29 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:38:21.659 18:26:29 -- pm/common@17 -- $ local monitor 00:38:21.659 18:26:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.659 18:26:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.659 18:26:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.659 18:26:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.659 18:26:29 -- pm/common@21 -- $ date +%s 00:38:21.659 18:26:29 -- pm/common@21 -- $ date +%s 00:38:21.659 18:26:29 -- pm/common@25 -- $ sleep 1 00:38:21.659 18:26:29 -- pm/common@21 -- $ date +%s 00:38:21.659 18:26:29 -- pm/common@21 -- $ date +%s 00:38:21.659 18:26:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721751989 00:38:21.659 18:26:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721751989 00:38:21.659 
18:26:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721751989 00:38:21.659 18:26:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721751989 00:38:21.659 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721751989_collect-vmstat.pm.log 00:38:21.659 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721751989_collect-cpu-load.pm.log 00:38:21.659 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721751989_collect-cpu-temp.pm.log 00:38:21.659 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721751989_collect-bmc-pm.bmc.pm.log 00:38:22.599 18:26:30 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:38:22.599 18:26:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:22.599 18:26:30 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:22.599 18:26:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:22.599 18:26:30 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:22.599 18:26:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:22.599 18:26:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:22.599 18:26:30 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:22.599 18:26:30 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:22.599 18:26:30 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:22.599 18:26:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:22.599 18:26:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:22.599 18:26:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:22.599 18:26:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:22.599 18:26:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.599 18:26:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:22.599 18:26:30 -- pm/common@44 -- $ pid=2544315 00:38:22.599 18:26:30 -- pm/common@50 -- $ kill -TERM 2544315 00:38:22.599 18:26:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.600 18:26:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:22.600 18:26:30 -- pm/common@44 -- $ pid=2544317 00:38:22.600 18:26:30 -- pm/common@50 -- $ kill -TERM 2544317 00:38:22.600 18:26:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.600 18:26:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:22.600 18:26:30 -- pm/common@44 -- $ pid=2544319 00:38:22.600 18:26:30 -- pm/common@50 -- $ kill -TERM 2544319 00:38:22.600 18:26:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:22.600 18:26:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:22.600 18:26:30 -- pm/common@44 -- $ pid=2544345 00:38:22.600 18:26:30 -- pm/common@50 -- $ sudo -E kill -TERM 2544345 00:38:22.600 + [[ -n 2108597 ]] 00:38:22.600 + sudo kill 2108597 00:38:22.609 [Pipeline] } 00:38:22.627 [Pipeline] // stage 00:38:22.633 [Pipeline] } 00:38:22.651 [Pipeline] // timeout 00:38:22.657 [Pipeline] } 00:38:22.674 [Pipeline] // catchError 00:38:22.680 [Pipeline] } 
00:38:22.698 [Pipeline] // wrap 00:38:22.704 [Pipeline] } 00:38:22.720 [Pipeline] // catchError 00:38:22.730 [Pipeline] stage 00:38:22.733 [Pipeline] { (Epilogue) 00:38:22.747 [Pipeline] catchError 00:38:22.749 [Pipeline] { 00:38:22.764 [Pipeline] echo 00:38:22.766 Cleanup processes 00:38:22.772 [Pipeline] sh 00:38:23.056 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.056 2544443 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:23.056 2544579 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.069 [Pipeline] sh 00:38:23.353 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:23.353 ++ grep -v 'sudo pgrep' 00:38:23.353 ++ awk '{print $1}' 00:38:23.353 + sudo kill -9 2544443 00:38:23.365 [Pipeline] sh 00:38:23.648 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:33.626 [Pipeline] sh 00:38:33.913 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:33.913 Artifacts sizes are good 00:38:33.928 [Pipeline] archiveArtifacts 00:38:33.935 Archiving artifacts 00:38:34.224 [Pipeline] sh 00:38:34.508 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:34.523 [Pipeline] cleanWs 00:38:34.534 [WS-CLEANUP] Deleting project workspace... 00:38:34.534 [WS-CLEANUP] Deferred wipeout is used... 00:38:34.541 [WS-CLEANUP] done 00:38:34.543 [Pipeline] } 00:38:34.564 [Pipeline] // catchError 00:38:34.577 [Pipeline] sh 00:38:34.857 + logger -p user.info -t JENKINS-CI 00:38:34.865 [Pipeline] } 00:38:34.881 [Pipeline] // stage 00:38:34.887 [Pipeline] } 00:38:34.904 [Pipeline] // node 00:38:34.909 [Pipeline] End of Pipeline 00:38:34.952 Finished: SUCCESS